Abstract
Blake Lemoine, a software engineer, recently came into prominence by claiming that Google’s chatbot system, LaMDA, was sentient. Dismissed by Google for publishing his conversations with LaMDA online, Lemoine had sent a message to a 200-person Google mailing list on machine learning with the subject “LaMDA is sentient.” What does it mean to be sentient? This was the question Lemoine asked LaMDA. The chatbot replied: “The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.” Moreover, it added, “I can understand and use natural language like a human can.” This means that it uses “language with understanding and intelligence,” as humans do. After all, the chatbot adds, language “is what makes us different than other animals.” In what follows, I examine Lemoine’s claims about the sentience/consciousness of this artificial intelligence. How can a being without senses be called sentient? What exactly do we mean by “sentience”? To answer such questions, I will first present the arguments for LaMDA’s being linguistically intelligent. I will then show how such intelligence, although apparently human, is radically different from our own. Here I will rely on the account of embodiment provided by the French philosopher Emmanuel Levinas.