Beyond algorithmic trust: interpersonal aspects on consent delegation to LLMs

Journal of Medical Ethics 50 (2):139-139 (2024)

Abstract

In their article ‘Consent-GPT: is it ethical to delegate procedural consent to conversational AI?’, Allen et al1 explore the ethical complexities involved in handing over parts of the process of obtaining medical consent to conversational Artificial Intelligence (AI) systems, that is, AI-driven large language models (LLMs) trained to interact with patients, inform them about upcoming medical procedures and assist in the process of obtaining informed consent.1 They focus specifically on challenges related to accuracy (4–5), trust (5), privacy (5), click-through consent (5) and responsibility (5–6), alongside some pragmatic considerations (6). While the authors competently navigate these critical issues and present several key perspectives, we posit that their discussion of trust in what they refer to as ‘Consent-GPT’ significantly underestimates one vital factor: the interpersonal aspect of trust. Admittedly, this …

Other Versions

No versions found

Links

PhilArchive





Similar books and articles

Using informed consent to save trust. Nir Eyal - 2014 - Journal of Medical Ethics 40 (7):437-444.
Why is informed consent important? Rebecca Roache - 2014 - Journal of Medical Ethics 40 (7):435-436.
Trust but verify. Sissela Bok - 2014 - Journal of Medical Ethics 40 (7):446-446.


Citations of this work

No citations found.
