Perceived responsibility in AI-supported medicine

AI and Society 1-11 (forthcoming)

Abstract

In a representative vignette study in Germany with 1,653 respondents, we investigated laypeople's attribution of moral responsibility in collaborative medical diagnosis. Specifically, we compared people's judgments in a setting in which physicians are supported by an AI-based recommender system with their judgments in a setting in which physicians are supported by a human colleague. We find that people tend to attribute moral responsibility to the artificial agent, although this is traditionally considered a category mistake in normative ethics. This tendency is stronger among people who believe that AI may become conscious at some point. As a consequence, less responsibility is attributed to human agents in hybrid diagnostic teams than in human-only diagnostic teams. Our findings may have implications for behavior in contexts of collaborative medical decision making with AI-based as opposed to human recommenders, because less responsibility is attributed to the agents who have the mental capacity to care about outcomes.


Links

PhilArchive




Author's Profile

Allison Fritz
Chadron State College
