Abstract
Large language models, deployed as chatbots, imitate dialogue so convincingly that they appear to be omniscient interlocutors, which has driven their widespread adoption. Yet even Google, in its Gemini chatbot, warns users not to trust what the chatbot writes and asks them to verify its answers. This review analyzes various types of LLM errors, such as the reversal curse and errors in number processing, in order to identify their causes. The analysis leads to the conclusion that these errors share common roots: transformers lack deep analogy-making, a hierarchy of schemas, and selectivity over the content taken into account during inference. The most important conclusion, however, is that transformers, like other neural networks, are built on the concept of processing an input signal, which creates a strong dependence on superficial noise and irrelevant information that the transformer's attention layers cannot compensate for. The concept of neural networks was laid down in the 1950s with F. Rosenblatt's perceptron and did not incorporate the findings of cognitive psychology that appeared later. According to the constructivist paradigm, an input word (or percept) serves only to check the correctness of a predictive model constructed for possible situations. This is the root cause of the biggest problem of transformers, known as hallucinations, and it can be eliminated only by changing the architecture of the neural network, not by increasing the amount of training data.