AI safety: a climb to Armageddon?

Philosophical Studies (forthcoming)

Abstract

This paper argues that certain AI safety measures, rather than mitigating existential risk, may instead exacerbate it. Under three key assumptions - the inevitability of AI failure, the expected correlation between an AI system's power at the point of failure and the severity of the resulting harm, and the tendency of safety measures to enable AI systems to become more powerful before failing - safety efforts have negative expected utility. The paper examines three response strategies: Optimism, Mitigation, and Holism. Each faces challenges stemming from intrinsic features of the AI safety landscape that we term Bottlenecking, the Perfection Barrier, and Equilibrium Fluctuation. The surprising robustness of the argument forces a reexamination of core assumptions around AI safety and points to several avenues for further research.
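
To see the shape of the expected-utility claim, here is a minimal formalization (illustrative only; the notation is ours, not the paper's). Let $p$ be the probability that the system eventually fails, $P_s$ its power at the point of failure given safety effort $s$, and $H(\cdot)$ a harm function increasing in that power:

\[
\mathbb{E}[\text{harm}(s)] = p \cdot H(P_s), \qquad
p \approx 1,\ \ \frac{dP_s}{ds} > 0,\ \ H' > 0
\ \Longrightarrow\
\frac{d}{ds}\,\mathbb{E}[\text{harm}(s)] > 0 .
\]

On these assumptions, additional safety effort raises the system's power at failure without lowering the (near-certain) probability of failure, so expected harm rises with safety effort; this is the sense in which safety measures are said to have negative expected utility.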

Other Versions

Cappelen, Herman; Dever, Josh; Hawthorne, John (manuscript). "AI Safety: A Climb to Armageddon?"

Analytics

Added to PP
2025-03-07

Downloads
53 (#441,169)

6 months
53 (#101,798)


Author Profiles

Herman Cappelen
University of Hong Kong
John Hawthorne
University of Southern California

Citations of this work

No citations found.
