Abstract
Most current discussions around ‘AI’ frame these technologies as tools or assistants at the service of human actors. While convenient, these metaphors might obscure the fact that the effective training of Large Language Models and other kinds of machine learning systems requires intensive data scraping and processing available only to the largest tech companies in the world. With that in mind, this essay seeks to examine the effects of generative AI on the fields of art, culture, and creativity by comparing these systems to social institutions. We contend that, similar to the modern museum, neural networks enable protocols of governance and dispersion that amplify the purchase of certain cultural signals. Operating at scale, these technologies function much like an archive, which Michel Foucault examined as a locus for the accumulation and exercise of power. We argue that, if left unchecked, AI models might significantly skew whole socio-cultural milieus according to their statistical rationality, toward what Hito Steyerl described as ‘mean images.’ We demonstrate how this skewing takes place in art projects such as Nora Al-Badri’s Babylonian Visions, which seems to reproduce a kind of essentialization typical of a colonial episteme, disconnecting visual patterns from their historical circumstances and ways of doing. In conclusion, we propose that a comparison between AI models and the modern museum may shed new light on issues of data extractivism, cultural expropriation, and the assimilation of otherness through stereotyping and homogenization accomplished by algorithmic means.