Abstract
Critics of clinical artificial intelligence (AI) suggest that the technology is ethically harmful because it may dehumanize the doctor–patient relationship (DPR) by eliminating moral empathy, which is viewed as a distinctively human trait. The benefits of clinical empathy (i.e., moral empathy applied in the clinical context) are widely praised, but this praise is often unquestioning and lacks context. In this article, I will argue that criticisms of clinical AI based on appeals to empathy are misplaced. As psychological and philosophical research has shown, empathy gives rise to certain kinds of biased reasoning and choices, and these biases consistently affect the DPR. Empathy may produce partial judgments and asymmetric DPRs, as well as disparities in the treatment of patients, thereby undermining respect for patient autonomy and equality. Engineers should therefore take the flaws of empathy into account when designing affective artificial systems in the future. Some ethicists have defended sympathy and compassion (i.e., displaying emotional concern while maintaining a balanced distance) as more beneficial than empathic perspective-taking in the clinical context, yet these claims do not appear to have influenced the AI debate. Thus, this article will further argue that if machines are programmed for affective behavior, they should also be given some ethical scaffolding.