A Loosely Wittgensteinian Conception of the Linguistic Understanding of Large Language Models like BERT, GPT-3, and ChatGPT

Grazer Philosophische Studien 99 (4):485-523 (2023)

Abstract

In this article, I develop a loosely Wittgensteinian conception of what it takes for a being, including an AI system, to understand language, and I suggest that current state-of-the-art systems are closer to fulfilling these requirements than one might think. Developing and defending this claim has both empirical and conceptual aspects. The conceptual aspects concern the criteria that are reasonably applied when judging whether some being understands language; the empirical aspects concern the question of whether a given being fulfills these criteria. On the conceptual side, the article builds on Glock's concept of intelligence, Taylor's conception of intrinsic rightness, and Wittgenstein's rule-following considerations. On the empirical side, it is argued that current transformer-based NNLP models, such as BERT and GPT-3, come close to fulfilling these criteria.



Author's Profile

Reto Gubelmann
University of Zürich
