Abstract
> Stephan, a middle-aged immigrant, presents with acute sepsis from his diabetic foot ulcer and confusion that significantly diminishes his decision-making capacity. His attending physician, Marta, has initiated an aggressive antibiotic regimen but is considering amputation. Unsure whether this aligns with the preferences of Stephan, his family, and his devoted support worker, Emily, she is considering using a P4 AI for substituted judgement, drawing on a corpus of Stephan’s emails and online posts.
>
> When diagnosed with diabetes shortly after immigrating seventeen years ago, Stephan railed online against the disease’s ravages, declaiming that he would rather be dead than blind or immobile. Five years ago, anxiety over disturbing trends in online discourse led him to abruptly cease all online communication, turning increasingly to Emily as a confidant and guide to advances in vision treatment and prosthetics. Two years ago, Stephan gained significant new purpose and joy with the birth of his first grandchild, Anya.

This case, following Annoni,1 helps illustrate the potential use of P4 artificial intelligence (AI) to advance moral goods in a substituted judgement setting by (1) honouring the patient’s unique identity and past agency and (2) reducing the burdens on human surrogate decision-makers. However, against (2), we argue that decision-making discourse …