Abstract
Discussions of artificial intelligence have been shaped by two brilliant thought-experiments: Turing’s Imitation Game for thinking systems and Searle’s Chinese Room Argument. In many ways, debates about large language models (LLMs) struggle to move beyond these original, opposing thought-experiments. So, in this paper, I ask whether we can move the debate forward by exploring the features that Sceptics about LLM abilities take to ground meaning. Section 1 sketches the options, while Sections 2 and 3 explore the common requirement of a robust relation between linguistic signs and external objects. Section 3 argues that concerns about worldly connections can be met, and thus that the outputs of LLMs should be viewed as genuinely meaningful. Section 4 then turns to the argument that the kind of derived meaning explored in Section 3 is insufficient for LLMs to count as meaningful systems per se, examining the claim that they must possess original intentionality before we admit them to the space of meaning. I argue that this demand is a prerequisite not for meaning per se, but rather for certain kinds of agency or conscious understanding. I suggest that the demands for original intentionality are not currently met by LLMs, and that we should be cautious about whether we want them to be.