Abstract
Voice-based, spoken interaction with artificial agents has become a part of everyday life in many countries: artificial voices guide us through our bank’s customer service, Amazon’s Alexa tells us which groceries we need to buy, and we can discuss central motifs in Shakespeare’s work with ChatGPT. Language, which is still largely seen as a uniquely human capacity, is now increasingly produced, or so it appears, by non-human entities, contributing to their perception as ‘humanlike.’ The capacity for language is far from the only prototypically human feature attributed to ‘speaking’ machines; their potential agency, consciousness, and even sentience have been widely discussed in the media. This paper argues that a linguistic analysis of agency (based on semantic roles) and animacy can provide meaningful insights into the sociocultural conceptualisations of artificial entities as humanlike actors. A corpus-based analysis investigates the varying attributions of agency to the voice user interfaces Alexa, Siri, and Google Assistant in German media data. The analysis provides evidence for the important role that linguistic anthropomorphisation plays in the sociocultural attribution of agency and consciousness to artificial technological entities, and shows how, in particular, the practice of using personal names for these devices contributes to the attribution of humanlikeness: Amazon’s Alexa and Apple’s Siri are linguistically portrayed as sentient entities who listen, act, and have a mind of their own, whilst the lack of a personal name renders the Google Assistant much more resistant to anthropomorphisation.