ChatGPT’s Responses to Dilemmas in Medical Ethics: The Devil is in the Details

American Journal of Bioethics 23 (10):63-65 (2023)

Abstract

In their Target Article, Rahimzadeh et al. (2023) discuss the virtues and vices of employing ChatGPT in ethics education for healthcare professionals. To this end, they confront the chatbot with a moral dilemma and analyse its response. In interpreting the case, ChatGPT relies on Beauchamp and Childress’ four prima-facie principles: beneficence, non-maleficence, respect for patient autonomy, and justice. While the chatbot’s output appears admirable at first sight, it is worth taking a closer look: ChatGPT not only misses the point when applying the principle of non-maleficence; its response also fails, in several places, to honour patient autonomy – a flaw that should be taken seriously if large language models are to be employed in ethics education. I therefore subject ChatGPT’s reply to detailed scrutiny and point out where it went astray.


Similar books and articles

A Chinese perspective on the concept of common morality by Beauchamp and Childress. Yanguang Wang - 2017 - Eubios Journal of Asian and International Bioethics 27 (4): 132-134.
The bioethical principles and Confucius' moral philosophy. D. F.-C. Tsai - 2005 - Journal of Medical Ethics 31 (3): 159-163.
Not just autonomy - the principles of American biomedical ethics. S. Holm - 1995 - Journal of Medical Ethics 21 (6): 332-338.


Author's Profile

Lukas J. Meier
Harvard University
