Plurale Autorschaft von Mensch und Künstlicher Intelligenz?
Abstract
This paper (in German) discusses what is going on when large language models (LLMs) produce meaningful text in response to human prompts. Can LLMs be understood as authors or producers of speech acts? I argue that this question must be answered in the negative, for two reasons. First, owing to their lack of semantic understanding, LLMs do not understand what they are saying and hence literally do not know what they are (linguistically) doing. Since the agent's knowing what they are doing is a necessary condition of action, LLMs cannot be said to produce speech acts. Second, agency conceptually implies accepting accountability for one's doings, which in turn implies some kind of practical commitment to live up to that accountability. Since LLMs lack any kind of practical commitment, they cannot be said to produce speech acts. I then assess two proposals for how better to understand the contribution of LLMs to human discourse. According to Anna Strasser and Eric Schwitzgebel, LLMs should be understood as junior partners in asymmetric joint action. I claim that this suggestion founders on the same objections rehearsed above. According to Alice Helliwell, they should be understood as a radically new kind of component of our (human) extended mind. Although I have reservations about this suggestion as well, it seems the most promising at this point.