Why don’t transformers think like humans?

Philosophical Problems of IT and Cyberspace (PhilITandC) 2:87-98 (2025)

Abstract

Large language models, in the form of chatbots, imitate dialogue with an apparently omniscient interlocutor so realistically that they have become widespread. Yet even Google, in its Gemini chatbot, does not recommend trusting what the chatbot writes and asks users to check its answers. This review analyzes various types of LLM errors, such as the reversal (inversion) curse, errors in number processing, and others, in order to identify their causes. The analysis leads to the conclusion that all of these errors have common causes: transformers lack deep analogy, a hierarchy of schemas, and selectivity about which content is taken into account during inference. The most important conclusion, however, is that transformers, like other neural networks, are built on the concept of processing an input signal, which creates a strong dependence on superficial noise and irrelevant information that the transformer's attention layer cannot compensate for. The neural network concept was laid down in the 1950s with F. Rosenblatt's perceptron and did not take into account the achievements of cognitive psychology that appeared later. According to the constructivist paradigm, an input word (or percept) is only a way to check the correctness of a predictive model constructed for possible situations. This is the cause of the biggest problem of transformers, called hallucinations, and it can be eliminated only by changing the neural network architecture, not by increasing the amount of training data.
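
As a minimal sketch (not from the paper; all names and values are chosen purely for illustration), the Python/NumPy snippet below implements standard scaled dot-product attention and shows the point made above about input-signal processing: the output is a softmax-weighted sum over every token in the context, so an irrelevant "noise" token always receives a strictly positive weight and therefore always leaks into the result, however small its contribution.

```python
import numpy as np

def attention(Q, K, V):
    """Standard scaled dot-product attention over all tokens in the context."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                                      # query-key similarities
    weights = np.exp(scores) / np.exp(scores).sum(-1, keepdims=True)   # softmax: every weight is > 0
    return weights @ V, weights                                        # output = weighted sum of ALL values

rng = np.random.default_rng(0)
relevant = rng.normal(size=(3, 8))    # embeddings of tokens that actually matter for the answer
noise = rng.normal(size=(1, 8))       # an irrelevant token injected into the context
K = V = np.vstack([relevant, noise])  # keys/values cover relevant and noise tokens alike
Q = relevant[:1]                      # a query built from a relevant token

out, w = attention(Q, K, V)
print(w)  # the weight on the noise token (last column) is small but strictly positive,
          # so the noise token inevitably contributes to the output vector `out`
```

Because the softmax can only down-weight a token, never assign it exactly zero, attention can attenuate irrelevant input but cannot exclude it, which is the architectural dependence on the input signal that the abstract points to.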

Other Versions

original Хомяков, А. Б (2025) "Why don’t transformers think like humans?". Philosophical Problems of IT and Cyberspace (PhilIT&C) 2:87-98

Links

PhilArchive

Similar books and articles

Why don’t transformers think like humans? А. Б Хомяков - 2025 - Philosophical Problems of IT and Cyberspace (PhilIT&C) 2:87-98.
Assessing the Strengths and Weaknesses of Large Language Models. Shalom Lappin - 2023 - Journal of Logic, Language and Information 33 (1):9-20.
Evolution of natural language processing methods. А. Ю Беседина - 2025 - Philosophical Problems of IT and Cyberspace (PhilITandC) 2:52-63.
Enhanced Image Captioning Using CNN and Transformers with Attention Mechanism. Ch Vasavi - 2024 - International Journal of Engineering Innovations and Management Strategies 1 (1):1-12.
Foundations of Generative AI. Ken Huang, Yang Wang & Xiaochen Zhang - 2024 - In Ken Huang, Yang Wang, Ben Goertzel, Yale Li, Sean Wright & Jyoti Ponnapalli (eds.), Generative AI Security: Theories and Practices. Springer Nature Switzerland. pp. 3-30.

Analytics

Added to PP
2025-01-18

Downloads
2 (#1,890,538)

6 months
2 (#1,683,984)


Citations of this work

No citations found.


References found in this work

No references found.
