First-person representations and responsible agency in AI

Synthese 199 (3-4):7061-7079 (2021)

Abstract

In this paper I investigate which of the main conditions proposed in the moral responsibility literature spell trouble for the idea that Artificial Intelligence Systems (AISs) could ever be full-fledged responsible agents. After arguing that the standard construals of the control and epistemic conditions impose no in-principle barrier to AISs being responsible agents, I identify the requirement that responsible agents must be aware of their own actions as the main locus of resistance to attributing that kind of agency to AISs. This is because this type of awareness is thought to involve first-person or de se representations, which, in turn, are usually assumed to involve some form of consciousness. I clarify what this widespread assumption involves and conclude that the possibility of AISs’ moral responsibility hinges on what the correct theory of de se representations ultimately turns out to be.

Links

PhilArchive




Author's Profile

Miguel Ángel Sebastián
National Autonomous University of Mexico
