Large language models belong in our social ontology

In Anna Strasser (ed.), Anna's AI Anthology. How to live with smart machines? Berlin: Xenomoi Verlag (2024)

Abstract

The recent advances in Large Language Models (LLMs) and their deployment in social settings prompt an important philosophical question: are LLMs social agents? This question finds its roots in the broader exploration of what engenders sociality. Since AI systems like chatbots, carebots, and sexbots are expanding the pre-theoretical boundaries of our social ontology, philosophers have two options. One is to deny LLMs membership in our social ontology on theoretical grounds, claiming something along the lines that only organic or X-type creatures belong in our social world. The other is to expand our ontological boundaries. I take the second route and claim that LLMs implemented as social chatbots are social agents. Utilizing Brian Epstein's concepts of grounding and anchoring, alongside Dee Payton's criteria for what makes a property social, I posit that the capacity for conversation is a social property. Further, within a framework where social agency, like agency, is considered multi-dimensional, possessing a social property is sufficient for attaining social agency, thus establishing chatbots as social agents. A helpful concept here is Levels of Abstraction (LoA). The LoA framework allows for extracting relevant and important information from the target domain (the social world). For example, the property of being a 'boy' and the capacity for conversation are social properties; hence, they are part of the social domain. LLMs have the capacity for conversation. So, if we focus on the conversational LoA and not the gender LoA, then LLMs occupy a dimension of sociality (i.e., the conversational dimension).

Other Versions

No versions found

Links

PhilArchive

External links

  • This entry has no external links.

Similar books and articles

Addressing Social Misattributions of Large Language Models: An HCXAI-based Approach.Andrea Ferrario, Alberto Termine & Alessandro Facchini - forthcoming - available at https://arxiv.org/abs/2403.17873 (extended version of the manuscript accepted for the ACM CHI Workshop on Human-Centered Explainable AI 2024 (HCXAI24)).
Searching for social properties.Dee Payton - 2022 - Philosophy and Phenomenological Research 106 (3):741-754.
Imitation and Large Language Models.Éloïse Boisseau - 2024 - Minds and Machines 34 (4):1-24.

Analytics

Added to PP
2024-06-15

Downloads
140 (#158,242)

6 months
140 (#31,829)


Author's Profile

Syed AbuMusab
Yale University

Citations of this work

No citations found.


References found in this work

No references found.
