Abstract
Large language models and other generative artificial intelligence systems are achieving increasingly impressive results, yet those results still tend to be dull and uninspired. This paper argues that this dullness can be linked to the philosophical notion of inauthenticity as presented by Kierkegaard, Nietzsche, and Heidegger, and that this inauthenticity is fundamentally grounded in the design and structure of such systems, by virtue of the way they statistically level down the materials on which they are trained. Although it seems possible to create the conditions for authenticity in these systems, the resulting authenticity would be grounded in machine intelligence, not human intelligence. The argument extends the criticisms of artificial intelligence articulated by Hubert Dreyfus, updated to account for recent developments in machine learning and artificial neural networks. While more optimistic than Dreyfus about the prospects for successfully creating artificial intelligence, this paper argues that the resulting intelligence, if fully authentic, may not align well with human intelligence and may not be desirable for humans.