On the Ethical and Epistemological Utility of Explicable AI in Medicine

Philosophy and Technology 35 (2):1-31 (2022)

Abstract

In this article, I will argue in favor of both the ethical and epistemological utility of explanations in artificial intelligence-based medical technology. I will build on the notion of "explicability" due to Floridi, which holds both the intelligibility and the accountability of AI systems to be important for truly delivering AI-powered services that strengthen autonomy, beneficence, and fairness. I maintain that explicable algorithms do, in fact, strengthen these ethical principles in medicine, e.g., in direct patient–physician contact, as well as on a longer-term epistemological level, by facilitating scientific progress that is informed through practice. With this article, I will therefore attempt to counter arguments against demands for explicable AI in medicine that rest on a notion of "whatever heals is right." I support this position by elaborating on the positive aspects of explicable AI in medicine and by pointing out the risks of non-explicable AI.
