The Ethical Implications of AI in Healthcare

Abstract

This dissertation examines the ethical implications of implementing artificial intelligence in healthcare. Three approaches to the role that artificial intelligence ought to play in healthcare are considered: (1) the neo-Luddite approach, which urges against the implementation of artificial intelligence in healthcare altogether; (2) the substitutive approach, which favours the goal of ultimately substituting artificial intelligence systems for human clinicians; and (3) the assistive approach, which favours the implementation of artificial intelligence systems in healthcare, but only as tools that assist, rather than replace, human clinicians. The dissertation begins by examining what excellence in the practice of medicine entails and deriving some formal duties of clinicians, before assessing the risks that different approaches to the use of AI may pose to the practice. The first position evaluated is the neo-Luddite approach. Several arguments in its favour are considered, including worries about biased algorithms, decreased communication with patients, confidentiality, the resurgence of paternalism, the potential for unethical design, overdependence on AI, and inscrutability. After responding to these concerns and rejecting the neo-Luddite approach, the dissertation shifts its focus to the assistive and substitutive approaches. Before these two approaches are evaluated, the concepts of “assistance” and “substitution” are analyzed and definitions of assistive and substitutive technology are provided. With these definitions at hand, the remaining two approaches are assessed, beginning with the substitutive approach, which is rejected for reasons related both to AI’s limited capacity to foster a strong patient-clinician relationship (owing to its limited capacity for compassion, recognition, communication, creating trust, and respecting patient autonomy) and to concerns at a more systemic level (including its likelihood of causing responsibility gaps and producing erroneous and biased decisions). The final part of the dissertation focuses on the assistive approach. Issues related to clinician-AI disagreement and the use of opaque decision support systems are addressed, and recommendations for handling these issues, derived from a discussion of the epistemic role of assistive AI, are provided. The dissertation concludes with the recommendation that the assistive approach offers a superior outlook for the future of medicine.

Author's Profile

Rand Hirmiz
York University (PhD)
