An Enactive Approach to Value Alignment in Artificial Intelligence: A Matter of Relevance

In Vincent C. Müller (ed.), Philosophy and Theory of AI. Springer Cham. pp. 119-135 (2021)

Abstract

The “Value Alignment Problem” is the challenge of how to align the values of artificial intelligence with human values, whatever they may be, such that AI does not pose a risk to the existence of humans. Existing approaches appear to conceive of the problem as “how do we ensure that AI solves the problems we give it in the right way”, in order to avoid the possibility of AI turning humans into paperclips to “make more paperclips” or eradicating the human race to “solve climate change”. This paper proposes an approach to Alignment rooted in the Enactive theory of mind that reconceptualises it as “how do we make relevant to AI what is relevant to humans”. This conceptualisation is supported with a discussion of 4E cognition, and the paper goes on to suggest that the Alignment Problem and the Frame Problem are the same problem. The paper concludes with a discussion of the trade-offs of the different approaches to value alignment.

Links

PhilArchive

Analytics

Added to PP
2022-11-19


Author's Profile

Michael Cannon
Eindhoven University of Technology
