Does ChatGPT Have a Mind?

Abstract

This paper examines whether Large Language Models (LLMs) like ChatGPT possess minds, focusing specifically on whether they have a genuine folk psychology encompassing beliefs, desires, and intentions. We approach this question by investigating two key aspects: internal representations and dispositions to act. First, we survey various philosophical theories of representation, including informational, causal, structural, and teleosemantic accounts, arguing that LLMs satisfy key conditions proposed by each. We draw on recent interpretability research in machine learning to support these claims. Second, we explore whether LLMs exhibit robust dispositions to perform actions, a necessary component of folk psychology. We consider two prominent philosophical traditions, interpretationism and representationalism, to assess LLM action dispositions. While we find evidence suggesting LLMs may satisfy some criteria for having a mind, particularly in game-theoretic environments, we conclude that the data remains inconclusive. Additionally, we reply to several skeptical challenges to LLM folk psychology, including issues of sensory grounding, the "stochastic parrots" argument, and concerns about memorization. Our paper has three main upshots. First, LLMs do have robust internal representations. Second, it remains an open question whether LLMs have robust action dispositions. Third, existing skeptical challenges to LLM representation do not survive philosophical scrutiny.

Links

PhilArchive

Similar books and articles

On the attribution of confidence to large language models. Geoff Keeling & Winnie Street - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
LLMs, Turing tests and Chinese rooms: the prospects for meaning in large language models. Emma Borg - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
Large Language Models and Biorisk. William D’Alessandro, Harry R. Lloyd & Nathaniel Sharadin - 2023 - American Journal of Bioethics 23 (10): 115-118.

Analytics

Added to PP
2024-06-27

Downloads
Total: 986 (#21,608)
Last 6 months: 588 (#2,104)

Author Profiles

Simon Goldstein
University of Hong Kong
Ben Levinstein
University of Illinois, Urbana-Champaign
