Abstract
In their Target Article, Rahimzadeh et al. (2023) discuss the virtues and vices of employing ChatGPT in ethics education for healthcare professionals. To this end, they confront the chatbot with a moral dilemma and analyse its response. In interpreting the case, ChatGPT relies on Beauchamp and Childress's four prima facie principles: beneficence, non-maleficence, respect for patient autonomy, and justice. While the chatbot's output appears admirable at first sight, it is worth taking a closer look: ChatGPT not only misses the point when applying the principle of non-maleficence but also fails, in several places, to honour patient autonomy – a flaw that should be taken seriously if large language models are to be employed in ethics education. I therefore subject ChatGPT's reply to detailed scrutiny and point out where it goes astray.