Abstract
Starting from a behavioral definition of intelligence, which describes it as the ability to adapt to the surrounding environment and to deal effectively with new situations (Anastasi, 1986), this paper examines to what extent the performance achieved by ChatGPT in the linguistic domain can be considered intelligent behavior and to what extent it cannot. It also clarifies in what sense the hypothesis of a decoupling between cognitive and problem-solving abilities, proposed by Floridi (2017) and Floridi and Chiriatti (2020), should be interpreted. The Symbol Grounding Problem (Harnad, 1990) is then addressed to show the problematic relationship between ChatGPT and the natural environment, and thus its inability to understand the symbols it manipulates. To explain why ChatGPT fails at this task, the paper investigates the issue and proposes a possible solution in the artificial domain, drawing a comparison with the natural ability of living beings to ground their own meanings in certain basic cognitive-sensory capacities, which, it is argued, are directly related to the emergence of self-awareness in humans. This raises the question of whether a concrete solution to the Symbol Grounding Problem in the artificial domain would require the development of cognitive abilities fully comparable to those of humans. Finally, I explain the difficulties that would have to be overcome before such a level could be reached, since human cognitive capacities are intimately linked to intersubjectivity and intercorporeality.