Do Transformers Dream of Real Sheep? Exploring the Unconscious of LLMs through Žižek’s Psychoanalytic Semiotic Lens

International Journal of Žižek Studies 19 (1) (2025)

Abstract

In his 2020 work, _Hegel in a Wired Brain_, Žižek explores whether digital machines can comprehend the unconscious as the surplus of language. On his account, even if future digital machines could decode and comprehend all human thoughts and discourse, they would remain incapable of capturing the unconscious dimension that is retroactively constituted within the chain of signifiers. In recent years, the remarkable advances of large language models (LLMs) such as ChatGPT (built on the Generative Pre-trained Transformer, GPT) in natural language processing have made these models the most promising approaches in artificial intelligence and significant subjects of philosophical reflection. A pressing question therefore arises: can LLMs comprehend, or even possess, their own unconscious? To address this question, the paper first provides a brief technical overview of LLMs and then outlines Žižek’s interpretation of the unconscious via Lacanian psychoanalytic semiotics. Drawing on several intriguing experiments, such as questioning LLMs about their understanding of the coffee joke and asking them to generate similar jokes, the study analyzes the unconscious of LLMs. The conclusion is a cautious one: while LLMs do not yet possess a human-equivalent unconscious, they can comprehend and partially access this paradoxical space. Rather than focusing on the traditional issue of whether AI can understand semantics as opposed to syntax, the paper centers on the unconscious third dimension of the “undead” that lies between semantics and syntax, and on whether LLMs can understand this retroactively generated paradoxical void within the chain of signifiers.


Similar books and articles

Large Language Models and the Reverse Turing Test. Terrence Sejnowski - 2023 - Neural Computation 35 (3): 309–342.
Artificial consciousness. Adrienne Prettyman - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.

Analytics

Added to PP
2025-03-12

