Robustness to Fundamental Uncertainty in AGI Alignment

Journal of Consciousness Studies 27 (1-2):225-241 (2020)

Abstract

The AGI alignment problem has a bimodal distribution of outcomes, with most outcomes clustering around the poles of total success and catastrophic, existential failure. Consequently, attempts to solve AGI alignment should, all else being equal, prefer false negatives (ignoring research programs that would have succeeded) to false positives (pursuing research programs that will unexpectedly fail). We therefore propose a policy of responding to points of philosophical and practical uncertainty in the alignment problem by limiting and choosing necessary assumptions so as to reduce the risk of false positives. Herein we explore in detail two relevant points of uncertainty on which AGI alignment research hinges, meta-ethical uncertainty and uncertainty about mental phenomena, and show how to reduce false positives in response to each.


Links

PhilArchive

Analytics

Added to PP
2020-02-14


Author's Profile

G Gordon Worley III
Phenomenological AI Safety Research Institute

Citations of this work

No citations found.

