Uncovering the gap: challenging the agential nature of AI responsibility problems

AI and Ethics 1-14 (2025)

Abstract

In this paper, I will argue that the responsibility gap arising from new AI systems is reducible to the problem of many hands and the problem of collective agency. A systematic analysis of the agential dimension of AI will lead me to a disjunctive conclusion: either we reduce individual responsibility gaps to the problem of many hands, or we abandon the individual dimension and accept the possibility of responsible collective agents. Depending on which conception of AI agency we begin with, the responsibility gap boils down to one of these two moral problems. Moreover, I will contend that this conclusion reveals an underlying weakness in AI ethics: a lack of attention to the question of the field's disciplinary boundaries. This absence has made it difficult to identify what is specific to the responsibility gap arising from new AI systems, as compared to the responsibility gaps found in other areas of applied ethics. Lastly, I will outline these specific aspects.

