Abstract
A number of scholars and policy-makers have raised serious concerns about the impact of chatbots and generative artificial intelligence (AI) on the spread of political disinformation. An increasingly popular proposal to address these concerns is to pass laws requiring that artificially generated and artificially disseminated content be labeled as such, thereby ensuring a degree of transparency in this rapidly transforming environment. This article argues that such laws are misguided, for two reasons. First, we aim to show that legally requiring the disclosure of the automated nature of bot accounts and AI-generated content is unlikely to improve the quality of political discussion on social media. This is because the fact that political content was created or spread by a bot or a language model is itself politically relevant information, and people reason very poorly about such information. Second, we aim to show that the main motivation for these laws – the threat of coordinated disinformation campaigns, automated or not – appears overstated.