Redefining Human-Centered AI: The human impact of AI-based recommendation engines
Abstract
In recent years, the flood of information has become overwhelming, and we experience constant information overload. Information overload is neither new nor unique to our age: it is well documented in the classical work of Simon (1971), and technological solutions were proposed even earlier (Bush, 1945). Certain affordances of artificial intelligence (AI), specifically machine learning (ML) algorithms and big-data analytics, promise to augment the user's information selection, processing, and decision making to enable a less stressful, "frictionless" life (Andrejevic, 2013). Typically, these solutions combine ML algorithms, such as recommendation engines (for information search, news, music, shopping, etc.), and apply them to the user's digital data-doppelgänger. Zuboff (2018) claimed that most platforms use this model, which she named surveillance capitalism, to acquire large profits. In doing so, technology companies reshape the political, sociological, and economic spheres to fit the surveillance capitalism model. Although discussion of those implications is of immense importance, it ignores the question posed in the title of this chapter: can AI-based information processing algorithms, intended to augment and even replace our internal cognitive processing, be considered human-centered AI? To answer this question, the chapter surveys the effects that recommendation engines based on data-doppelgängers have on users' sense of self. Drawing on prior research, it focuses on four key areas of influence on users' self-perception: autonomy, agency, rationality, and memory (Bar-Gil, 2020). It suggests that although these technologies may afford new capabilities that augment human users' engagement in the information sphere, they have a considerable effect on the users, diminishing their agency, making them less autonomous and more conformant to algorithmic rationality, and changing their memory patterns.