Is There a Trade-Off Between Human Autonomy and the ‘Autonomy’ of AI Systems?

In Conference on Philosophy and Theory of Artificial Intelligence. Springer International Publishing. pp. 67-71 (2022)

Abstract

Autonomy is often considered a core value of Western society that is deeply entrenched in moral, legal, and political practices. The development and deployment of artificial intelligence (AI) systems to perform a wide variety of tasks has raised new questions about how AI may affect human autonomy. Numerous guidelines on the responsible development of AI now emphasise the need for human autonomy to be protected. In some cases, this need is linked to the emergence of increasingly ‘autonomous’ AI systems that can perform tasks without human control or supervision. Do such ‘autonomous’ systems pose a risk to our own human autonomy? In this article, I address the question of a trade-off between human autonomy and system ‘autonomy’.





