Abstract
In their article ‘Consent-GPT: is it ethical to delegate procedural consent to conversational AI?’, Allen et al1 explore the ethical complexities of handing over parts of the medical consent process to conversational Artificial Intelligence (AI) systems, that is, AI-driven large language models (LLMs) trained to interact with patients, inform them about upcoming medical procedures and assist in obtaining informed consent.1 They focus specifically on challenges related to accuracy (4–5), trust (5), privacy (5), click-through consent (5) and responsibility (5–6), alongside some pragmatic considerations (6). While the authors competently navigate these critical issues and present several key perspectives, we posit that their discussion of trust in what they refer to as ‘Consent-GPT’ significantly underestimates one vital factor: the interpersonal aspect of trust. Admittedly, this …