Redefining Human-Centered AI: The human impact of AI-based recommendation engines

In Maria Axente, Jean-Louis Denis, Atsuo Kishimoto & Catherine Régis (eds.), Human-Centered AI: A Multidisciplinary Perspective for Policy-Makers, Auditors, and Users. CRC Press. pp. 34-45 (2024)

Abstract

In recent years, the flood of information has become overwhelming, and we experience constant information overload. Information overload is neither new nor unique to our age: it is well documented in the classical work of Simon (1971), and technological solutions were offered even earlier (Bush, 1945). Some affordances of artificial intelligence (AI), specifically machine learning (ML) algorithms and big-data analytics, promise to augment the user's information selection, processing, and decision-making to enable a less stressful, "frictionless" life (Andrejevic, 2013). Typically, these solutions combine ML algorithms, such as recommendation engines (for information search, news, music, shopping, etc.), and apply them to the user's digital data-doppelgänger. Zuboff (2018) claimed that most platforms use this model, which she named surveillance capitalism, to acquire large profits. In doing so, technology companies shape the political, sociological, and economic spheres to fit the surveillance-capitalism model. Although discussion of those implications is of immense importance, it ignores the question posed in the title of this chapter: can AI-based information-processing algorithms, aimed at augmenting and even replacing our internal cognitive processing, be considered human-centered AI? To answer this question, the chapter will survey the effects that recommendation engines built on data-doppelgängers have on users' sense of self. Drawing on prior research, it will focus on four key areas of influence on users' self-perception: autonomy, agency, rationality, and memory (Bar-Gil, 2020). It will suggest that although these technologies might afford new capabilities that augment users' engagement in the information sphere, they have a considerable effect on the users: diminishing their agency, making them less autonomous, pushing them to conform to algorithmic rationality, and changing their memory patterns.
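
To make the mechanism described in the abstract concrete, the following is a minimal, purely illustrative sketch (not taken from the chapter): a toy content-based recommender that aggregates a hypothetical user's interaction history into a profile vector, a crude stand-in for the data-doppelgänger, and ranks unseen items by cosine similarity to that profile. All item names and feature values are invented for illustration.

# Illustrative sketch only: a toy content-based recommendation engine.
# The "data-doppelgänger" is approximated as a profile vector aggregated
# from the user's past interactions. All data below is hypothetical.
import numpy as np

# Hypothetical item feature vectors (e.g., topic weights of news articles).
items = {
    "article_politics": np.array([0.9, 0.1, 0.0]),
    "article_music":    np.array([0.1, 0.8, 0.1]),
    "article_culture":  np.array([0.3, 0.6, 0.1]),
    "article_shopping": np.array([0.0, 0.2, 0.8]),
}

# The user's interaction history: items they previously clicked on.
history = ["article_politics", "article_politics", "article_music"]

# Build the profile (the stand-in "data-doppelgänger") as the mean of consumed items.
profile = np.mean([items[name] for name in history], axis=0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two feature vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank unseen items by similarity to the profile: the engine pre-selects
# information on the user's behalf.
ranked = sorted(
    ((name, cosine(profile, vec)) for name, vec in items.items()
     if name not in history),
    key=lambda pair: pair[1],
    reverse=True,
)
print(ranked)

Running the sketch prints the unseen items ordered by similarity to the aggregated profile; the point, as the abstract notes, is that the selection happens on the user's behalf, which is precisely where the questions about autonomy, agency, rationality, and memory arise.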

Similar books and articles

Risks Deriving from the Agential Profiles of Modern AI Systems. Barnaby Crook - forthcoming - In Vincent C. Müller, Aliya R. Dewey, Leonard Dung & Guido Löhr (eds.), Philosophy of Artificial Intelligence: The State of the Art. Berlin: SpringerNature.

Author's Profile

Oshri Bar-Gil
Bar-Ilan University, Ramat Gan

