Results for 'medical AI'

973 results found
  1. Randomised controlled trials in medical AI: ethical considerations. Thomas Grote - 2022 - Journal of Medical Ethics 48 (11):899-906.
    In recent years, there has been a surge of high-profile publications on applications of artificial intelligence (AI) systems for medical diagnosis and prognosis. While AI provides various opportunities for medical practice, there is an emerging consensus that the existing studies show considerable deficits and are unable to establish the clinical benefit of AI systems. Hence, the view that the clinical benefit of AI systems needs to be studied in clinical trials—particularly randomised controlled trials (RCTs)—is gaining ground. However, an (...)
    9 citations
  2. Trustworthy medical AI systems need to know when they don’t know. Thomas Grote - forthcoming - Journal of Medical Ethics.
    There is much to learn from Durán and Jongsma’s paper.1 One particularly important insight concerns the relationship between epistemology and ethics in medical artificial intelligence. In clinical environments, the task of AI systems is to provide risk estimates or diagnostic decisions, which then need to be weighed by physicians. Hence, while the implementation of AI systems might give rise to ethical issues—for example, overtreatment, defensive medicine or paternalism2—the issue that lies at the heart is an epistemic problem: how can (...)
    8 citations
  3. Medical AI and human dignity: Contrasting perceptions of human and artificially intelligent (AI) decision making in diagnostic and medical resource allocation contexts. Paul Formosa, Wendy Rogers, Yannick Griep, Sarah Bankins & Deborah Richards - 2022 - Computers in Human Behavior 133.
    Forms of Artificial Intelligence (AI) are already being deployed into clinical settings and research into its future healthcare uses is accelerating. Despite this trajectory, more research is needed regarding the impacts on patients of increasing AI decision making. In particular, the impersonal nature of AI means that its deployment in highly sensitive contexts-of-use, such as in healthcare, raises issues associated with patients’ perceptions of (un)dignified treatment. We explore this issue through an experimental vignette study comparing individuals’ perceptions of being (...)
  4. The virtues of interpretable medical AI. Joshua Hatherley, Robert Sparrow & Mark Howard - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (3):323-332.
    Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are 'black boxes'. The initial response in the literature was a demand for 'explainable AI'. However, recently, several authors have suggested that making AI more explainable or 'interpretable' is likely to be at the cost of the accuracy of these systems and that prioritising interpretability in medical AI may constitute a 'lethal prejudice'. In this paper, we defend the value of (...)
    4 citations
  5. Randomized Controlled Trials in Medical AI. Konstantin Genin & Thomas Grote - 2021 - Philosophy of Medicine 2 (1).
    Various publications claim that medical AI systems perform as well, or better, than clinical experts. However, there have been very few controlled trials and the quality of existing studies has been called into question. There is growing concern that existing studies overestimate the clinical benefits of AI systems. This has led to calls for more, and higher-quality, randomized controlled trials of medical AI systems. While this is a welcome development, AI RCTs raise novel methodological challenges that have seen little (...)
    9 citations
  6. The virtues of interpretable medical AI. Joshua Hatherley, Robert Sparrow & Mark Howard - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (3).
    Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are “black boxes.” The initial response in the literature was a demand for “explainable AI.” However, recently, several authors have suggested that making AI more explainable or “interpretable” is likely to be at the cost of the accuracy of these systems and that prioritizing interpretability in medical AI may constitute a “lethal prejudice.” In this paper, we defend the value of (...)
    1 citation
  7. Towards trustworthy medical AI ecosystems – a proposal for supporting responsible innovation practices in AI-based medical innovation. Christian Herzog, Sabrina Blank & Bernd Carsten Stahl - forthcoming - AI and Society:1-21.
    In this article, we explore questions about the culture of trustworthy artificial intelligence (AI) through the lens of ecosystems. We draw on the European Commission’s Guidelines for Trustworthy AI and its philosophical underpinnings. Based on the latter, the trustworthiness of an AI ecosystem can be conceived of as being grounded by both the so-called rational-choice and motivation-attributing accounts—i.e., trusting is rational because solution providers deliver expected services reliably, while trust also involves resigning control by attributing one’s motivation, and hence, goals, (...)
  8. Limits of trust in medical AI. Joshua James Hatherley - 2020 - Journal of Medical Ethics 46 (7):478-481.
    Artificial intelligence (AI) is expected to revolutionise the practice of medicine. Recent advancements in the field of deep learning have demonstrated success in a variety of clinical tasks: detecting diabetic retinopathy from images, predicting hospital readmissions, aiding in the discovery of new drugs, etc. AI’s progress in medicine, however, has led to concerns regarding the potential effects of this technology on relationships of trust in clinical practice. In this paper, I will argue that there is merit to these concerns, since AI (...)
    39 citations
  9. Distribution, Recognition, and Just Medical AI. Zachary Daus - 2025 - Philosophy and Technology 38 (1):1-17.
    Medical artificial intelligence (AI) systems are value-laden technologies that can simultaneously encourage and discourage conflicting values that may all be relevant for the pursuit of justice. I argue that the predominant theory of healthcare justice, the Rawls-inspired approach of Norman Daniels, neither adequately acknowledges such conflicts nor explains if and how they can be resolved. By juxtaposing Daniels’s theory of healthcare justice with Axel Honneth’s and Nancy Fraser’s respective theories of justice, I draw attention to one such conflict. Medical (...)
  10. The Ethics of Medical AI and the Physician-Patient Relationship. Sally Dalton-Brown - 2020 - Cambridge Quarterly of Healthcare Ethics 29 (1):115-121.
    This article considers recent ethical topics relating to medical AI. After a general discussion of recent medical AI innovations, and a more analytic look at related ethical issues such as data privacy, physician dependency on poorly understood AI helpware, bias in data used to create algorithms post-GDPR, and changes to the patient–physician relationship, the article examines the issue of so-called robot doctors. Whereas the so-called democratization of healthcare due to health wearables and increased access to medical information (...)
    11 citations
  11. Medical AI, Inductive Risk, and the Communication of Uncertainty: The Case of Disorders of Consciousness. Jonathan Birch - forthcoming - Journal of Medical Ethics.
    Some patients, following brain injury, do not outwardly respond to spoken commands, yet show patterns of brain activity that indicate responsiveness. This is “cognitive-motor dissociation” (CMD). Recent research has used machine learning to diagnose CMD from electroencephalogram (EEG) recordings. These techniques have high false discovery rates, raising a serious problem of inductive risk. It is no solution to communicate the false discovery rates directly to the patient’s family, because this information may confuse, alarm and mislead. Instead, we need a procedure (...)
  12. Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. Juan Manuel Durán & Karin Rolanda Jongsma - 2021 - Journal of Medical Ethics 47 (5).
    The use of black box algorithms in medicine has raised scholarly concerns due to their opaqueness and lack of trustworthiness. Concerns about potential bias, accountability and responsibility, patient autonomy and compromised trust transpire with black box algorithms. These worries connect epistemic concerns with normative issues. In this paper, we outline that black box algorithms are less problematic for epistemic reasons than many scholars seem to believe. By outlining that more transparency in algorithms is not always necessary, and by explaining that (...)
    63 citations
  13. Negotiating cultural sensitivity in medical AI. Ji-Young Lee - 2024 - Journal of Medical Ethics 50 (9):602-603.
    Ugar and Malele write that generic machine learning (ML) technologies for mental health diagnosis would be challenging to implement in sub-Saharan Africa due to cultural specificities in how those conditions are diagnosed. For example, they say that in South Africa, the appearance of ‘schizophrenia’ might be understood as a type of spiritual possession, rather than a mental disorder caused by a brain dysfunction. Hence, a generic ML system is likely to ‘misdiagnose’ persons whose symptomatology matches that of schizophrenia in the (...)
    1 citation
  14. Explainable machine learning practices: opening another black box for reliable medical AI. Emanuele Ratti & Mark Graves - 2022 - AI and Ethics:1-14.
    In the past few years, machine learning (ML) tools have been implemented with success in the medical context. However, several practitioners have raised concerns about the lack of transparency—at the algorithmic level—of many of these tools; and solutions from the field of explainable AI (XAI) have been seen as a way to open the ‘black box’ and make the tools more trustworthy. Recently, Alex London has argued that in the medical context we do not need machine learning tools (...)
    7 citations
  15. Beyond ideals: why the (medical) AI industry needs to motivate behavioural change in line with fairness and transparency values, and how it can do it. Alice Liefgreen, Netta Weinstein, Sandra Wachter & Brent Mittelstadt - 2024 - AI and Society 39 (5):2183-2199.
    Artificial intelligence (AI) is increasingly relied upon by clinicians for making diagnostic and treatment decisions, playing an important role in imaging, diagnosis, risk analysis, lifestyle monitoring, and health information management. While research has identified biases in healthcare AI systems and proposed technical solutions to address these, we argue that effective solutions require human engagement. Furthermore, there is a lack of research on how to motivate the adoption of these solutions and promote investment in designing AI systems that align with values (...)
    1 citation
  16. Trust does not need to be human: it is possible to trust medical AI. Andrea Ferrario, Michele Loi & Eleonora Viganò - 2021 - Journal of Medical Ethics 47 (6):437-438.
    In his recent article ‘Limits of trust in medical AI,’ Hatherley argues that, if we believe that the motivations that are usually recognised as relevant for interpersonal trust have to be applied to interactions between humans and medical artificial intelligence, then these systems do not appear to be the appropriate objects of trust. In this response, we argue that it is possible to discuss trust in medical artificial intelligence (AI), if one refrains from simply assuming that trust (...)
    17 citations
  17. Embedded ethics: a proposal for integrating ethics into the development of medical AI. Alena Buyx, Sami Haddadin, Ruth Müller, Daniel Tigard, Amelia Fiske & Stuart McLennan - 2022 - BMC Medical Ethics 23 (1):1-10.
    The emergence of ethical concerns surrounding artificial intelligence (AI) has led to an explosion of high-level ethical principles being published by a wide range of public and private organizations. However, there is a need to consider how AI developers can be practically assisted to anticipate, identify and address ethical issues regarding AI technologies. This is particularly important in the development of AI intended for healthcare settings, where applications will often interact directly with patients in various states of vulnerability. In this (...)
    14 citations
  18. Should we replace radiologists with deep learning? Pigeons, error and trust in medical AI. Ramón Alvarado - 2021 - Bioethics 36 (2):121-133.
    13 citations
  19. Ethics of Medical AI. Giovanni Rubeis - 2024 - Springer Verlag.
    This is the first book to provide a coherent overview over the ethical implications of AI-related technologies in medicine. It explores how these technologies transform practices, relationships, and environments in the clinical field. It provides an introduction into ethical issues such as data security and privacy protection, bias and algorithmic fairness, trust and transparency, challenges to the doctor-patient relationship, and new perspectives for informed consent. The book focuses on the transformative impact that technology is having on medicine, and discusses several (...)
    2 citations
  20. Encompassing trust in medical AI from the perspective of medical students: a quantitative comparative study. Anamaria Malešević, Mária Kolesárová & Anto Čartolovni - 2024 - BMC Medical Ethics 25 (1):1-11.
    In the years to come, artificial intelligence will become an indispensable tool in medical practice. The digital transformation will undoubtedly affect today’s medical students. This study focuses on trust from the perspective of three groups of medical students - students from Croatia, students from Slovakia, and international students studying in Slovakia. A paper-pen survey was conducted using a non-probabilistic convenience sample. In the second half of 2022, 1715 students were surveyed at five faculties in Croatia and three (...)
  21. Two Reasons for Subjecting Medical AI Systems to Lower Standards than Humans. Jakob Mainz, Jens Christian Bjerring & Lauritz Munch - 2023 - ACM Proceedings of Fairness, Accountability, and Transparency (FAccT) 2023 1 (1):44-49.
    This paper concerns the double standard debate in the ethics of AI literature. This debate essentially revolves around the question of whether we should subject AI systems to different normative standards than humans. So far, the debate has centered around the desideratum of transparency. That is, the debate has focused on whether AI systems must be more transparent than humans in their decision-making processes in order for it to be morally permissible to use such systems. Some have argued that the (...)
  22. Relative explainability and double standards in medical decision-making: Should medical AI be subjected to higher standards in medical decision-making than doctors? Saskia K. Nagel, Jan-Christoph Heilinger & Hendrik Kempt - 2022 - Ethics and Information Technology 24 (2):20.
    The increased presence of medical AI in clinical use raises the ethical question which standard of explainability is required for an acceptable and responsible implementation of AI-based applications in medical contexts. In this paper, we elaborate on the emerging debate surrounding the standards of explainability for medical AI. For this, we first distinguish several goods explainability is usually considered to contribute to the use of AI in general, and medical AI in specific. Second, we propose to (...)
    5 citations
  23. Ethical and legal challenges of medical AI on informed consent: China as an example. Yue Wang & Zhuo Ma - 2025 - Developing World Bioethics 25 (1):46-54.
    The escalating integration of Artificial Intelligence (AI) in clinical settings carries profound implications for the doctrine of informed consent, presenting challenges that necessitate immediate attention. China, in its advancement in the deployment of medical AI, is proactively engaging in the formulation of legal and ethical regulations. This paper takes China as an example to undertake a theoretical examination rooted in the principles of medical ethics and legal norms, analyzing informed consent and medical AI through relevant literature data. (...)
    1 citation
  24. Responsibility beyond design: Physicians’ requirements for ethical medical AI. Martin Sand, Juan Manuel Durán & Karin Rolanda Jongsma - 2021 - Bioethics 36 (2):162-169.
    17 citations
  25. Governance of Medical AI. Calvin W. L. Ho & Karel Caals - 2024 - Asian Bioethics Review 16 (3):303-305.
  26. Before and beyond trust: reliance in medical AI. Charalampia Kerasidou, Angeliki Kerasidou, Monika Buscher & Stephen Wilkinson - 2021 - Journal of Medical Ethics 48 (11):852-856.
    Artificial intelligence is changing healthcare and the practice of medicine as data-driven science and machine-learning technologies, in particular, are contributing to a variety of medical and clinical tasks. Such advancements have also raised many questions, especially about public trust. As a response to these concerns there has been a concentrated effort from public bodies, policy-makers and technology companies leading the way in AI to address what is identified as a "public trust deficit". This paper argues that a focus on (...)
    13 citations
  27. Design publicity of black box algorithms: a support to the epistemic and ethical justifications of medical AI systems. Andrea Ferrario - 2022 - Journal of Medical Ethics 48 (7):492-494.
    In their article ‘Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI’, Durán and Jongsma discuss the epistemic and ethical challenges raised by black box algorithms in medical practice. The opacity of black box algorithms is an obstacle to the trustworthiness of their outcomes. Moreover, the use of opaque algorithms is not normatively justified in medical practice. The authors introduce a formalism, called computational reliabilism, which allows generating justified (...)
    2 citations
  28. Computer knows best? The need for value-flexibility in medical AI. Rosalind J. McDougall - 2019 - Journal of Medical Ethics 45 (3):156-160.
    Artificial intelligence (AI) is increasingly being developed for use in medicine, including for diagnosis and in treatment decision making. The use of AI in medical treatment raises many ethical issues that are yet to be explored in depth by bioethicists. In this paper, I focus specifically on the relationship between the ethical ideal of shared decision making and AI systems that generate treatment recommendations, using the example of IBM’s Watson for Oncology. I argue that use of this type of (...)
    60 citations
  29. A Tale of Two Deficits: Causality and Care in Medical AI. Melvin Chen - 2020 - Philosophy and Technology 33 (2):245-267.
    In this paper, two central questions will be addressed: ought we to implement medical AI technology in the medical domain? If yes, how ought we to implement this technology? I will critically engage with three options that exist with respect to these central questions: the Neo-Luddite option, the Assistive option, and the Substitutive option. I will first address key objections on behalf of the Neo-Luddite option: the Objection from Bias, the Objection from Artificial Autonomy, the Objection from Status (...)
    1 citation
  30. Explanatory pragmatism: a context-sensitive framework for explainable medical AI. Diana Robinson & Rune Nyrup - 2022 - Ethics and Information Technology 24 (1).
    Explainable artificial intelligence (XAI) is an emerging, multidisciplinary field of research that seeks to develop methods and tools for making AI systems more explainable or interpretable. XAI researchers increasingly recognise explainability as a context-, audience- and purpose-sensitive phenomenon, rather than a single well-defined property that can be directly measured and optimised. However, since there is currently no overarching definition of explainability, this poses a risk of miscommunication between the many different researchers within this multidisciplinary space. This is the problem we (...)
    12 citations
  31. Bring a ‘Patient’s Medical AI Journey’ to the Hill. Ian Stevens, Erin Williams, Jean-Christophe Bélisle-Pipon & Vardit Ravitsky - 2025 - American Journal of Bioethics 25 (3):132-135.
  32. Should we be afraid of medical AI? Ezio Di Nucci - 2019 - Journal of Medical Ethics 45 (8):556-558.
    I analyse an argument according to which medical artificial intelligence represents a threat to patient autonomy—recently put forward by Rosalind McDougall in the Journal of Medical Ethics. The argument takes the case of IBM Watson for Oncology to argue that such technologies risk disregarding the individual values and wishes of patients. I find three problems with this argument: it confuses AI with machine learning; it misses machine learning’s potential for personalised medicine through big data; it fails to distinguish (...)
    15 citations
  33. Generative AI, Specific Moral Values: A Closer Look at ChatGPT’s New Ethical Implications for Medical AI. Gavin Victor, Jean-Christophe Bélisle-Pipon & Vardit Ravitsky - 2023 - American Journal of Bioethics 23 (10):65-68.
    Cohen’s (2023) mapping exercise of possible bioethical issues emerging from the use of ChatGPT in medicine provides an informative, useful, and thought-provoking trigger for discussions of AI ethic...
    2 citations
  34. Equity, autonomy, and the ethical risks and opportunities of generalist medical AI. Reuben Sass - 2023 - AI and Ethics:1-11.
    This paper considers the ethical risks and opportunities presented by generalist medical artificial intelligence (GMAI), a kind of dynamic, multimodal AI proposed by Moor et al. (2023) for use in health care. The research objective is to apply widely accepted principles of biomedical ethics to analyze the possible consequences of GMAI, while emphasizing the distinctions between GMAI and current-generation, task-specific medical AI. The principles of autonomy and health equity in particular provide useful guidance for the ethical risks and (...)
  35. Secondary Use of Health Data for Medical AI: A Cross-Regional Examination of Taiwan and the EU. Chih-Hsing Ho - 2024 - Asian Bioethics Review 16 (3):407-422.
    This paper conducts a comparative analysis of data governance mechanisms concerning the secondary use of health data in Taiwan and the European Union (EU). Both regions have adopted distinctive approaches and regulations for utilizing health data beyond primary care, encompassing areas such as medical research and healthcare system enhancement. Through an examination of these models, this study seeks to elucidate the strategies, frameworks, and legal structures employed by Taiwan and the EU to strike a delicate balance between the imperative (...)
  36. Crossing the Trust Gap in Medical AI: Building an Abductive Bridge for xAI. Steven S. Gouveia & Jaroslav Malík - 2024 - Philosophy and Technology 37 (3):1-25.
    In this paper, we argue that one way to approach what is known in the literature as the “Trust Gap” in Medical AI is to focus on explanations from an Explainable AI (xAI) perspective. Against the current framework on xAI – which does not offer a real solution – we argue for a pragmatist turn, one that focuses on understanding how we provide explanations in Traditional Medicine (TM), composed by human agents only. Following this, explanations have two specific relevant (...)
  37. Putting explainable AI in context: institutional explanations for medical AI. Jacob Browning & Mark Theunissen - 2022 - Ethics and Information Technology 24 (2).
    There is a current debate about if, and in what sense, machine learning systems used in the medical context need to be explainable. Those arguing in favor contend these systems require post hoc explanations for each individual decision to increase trust and ensure accurate diagnoses. Those arguing against suggest the high accuracy and reliability of the systems is sufficient for providing epistemic justified beliefs without the need for explaining each individual decision. But, as we show, both solutions have limitations—and (...)
    5 citations
  38. Who's next? Shifting balances between medical AI, physicians and patients in shaping the future of medicine. Nils-Frederic Wagner, Mita Banerjee & Norbert W. Paul - 2022 - Bioethics 36 (2):111-112.
  39. Existing and Emerging Capabilities in the Governance of Medical AI. Gilberto K. K. Leung, Yuechan Song & Calvin W. L. Ho - 2024 - Asian Bioethics Review 16 (3):307-311.
  40. Handle with care: Assessing performance measures of medical AI for shared clinical decision-making. Sune Holm - 2021 - Bioethics 36 (2):178-186.
    In this article I consider two pertinent questions that practitioners must consider when they deploy an algorithmic system as support in clinical shared decision‐making. The first question concerns how to interpret and assess the significance of different performance measures for clinical decision‐making. The second question concerns the professional obligations that practitioners have to communicate information about the quality of an algorithm's output to patients in light of the principles of autonomy, beneficence, and justice. In the article I review the four (...)
    2 citations
  41. Enabling Demonstrated Consent for Biobanking with Blockchain and Generative AI. Caspar Barnes, Mateo Riobo Aboy, Timo Minssen, Jemima Winifred Allen, Brian D. Earp, Julian Savulescu & Sebastian Porsdam Mann - forthcoming - American Journal of Bioethics:1-16.
    Participation in research is supposed to be voluntary and informed. Yet it is difficult to ensure people are adequately informed about the potential uses of their biological materials when they donate samples for future research. We propose a novel consent framework which we call “demonstrated consent” that leverages blockchain technology and generative AI to address this problem. In a demonstrated consent model, each donated sample is associated with a unique non-fungible token (NFT) on a blockchain, which records in its metadata (...)
    1 citation
  42. Correction to: Secondary Use of Health Data for Medical AI: A Cross-Regional Examination of Taiwan and the EU. Chih-Hsing Ho - forthcoming - Asian Bioethics Review:1-2.
  43. Correction to: A Tale of Two Deficits: Causality and Care in Medical AI. Melvin Chen - 2019 - Philosophy and Technology 32 (4):769-770.
    The original version of this article unfortunately contains unconverted data in footnotes 5, 9 and 13.
  44. No we shouldn’t be afraid of medical AI; it involves risks and opportunities. Rosalind J. McDougall - 2019 - Journal of Medical Ethics 45 (8):559-559.
    In contrast to Di Nucci’s characterisation, my argument is not a technoapocalyptic one. The view I put forward is that systems like IBM’s Watson for Oncology create both risks and opportunities from the perspective of shared decision-making. In this response, I address the issues that Di Nucci raises and highlight the importance of bioethicists engaging critically with these developing technologies.
    3 citations
  45. What Are Patients Doing in the Loop? Patients as Fellow-Workers in the Everyday Use of Medical AI. Markus Herrmann - 2024 - American Journal of Bioethics 24 (9):91-93.
    In their article “What are Humans Doing in the Loop? Co-Reasoning and Practical Judgment When Using Machine Learning-Driven Decision Aids,” Salloch and Eriksen (2024) propose involving patients as...
    1 citation
  46. Take five? A coherentist argument why medical AI does not require a new ethical principle. Seppe Segers & Michiel De Proost - 2024 - Theoretical Medicine and Bioethics 45 (5):387-400.
    With the growing application of machine learning models in medicine, principlist bioethics has been put forward as needing revision. This paper reflects on the dominant trope in AI ethics to include a new ‘principle of explicability’ alongside the traditional four principles of bioethics that make up the theory of principlism. It specifically suggests that these four principles are sufficient and challenges the relevance of explicability as a separate ethical principle by emphasizing the coherentist affinity of principlism. We argue that, through (...)
  47. Scapegoat-in-the-Loop? Human Control over Medical AI and the (Mis)Attribution of Responsibility. Robert Ranisch - 2024 - American Journal of Bioethics 24 (9):116-117.
    The paper by Salloch and Eriksen (2024) offers an insightful contribution to the ethical debate on Machine Learning-driven Clinical Decision Support Systems (ML_CDSS) and provides much-needed conce...
    1 citation
  48. Regulating AI-Based Medical Devices in Saudi Arabia: New Legal Paradigms in an Evolving Global Legal Order. Barry Solaiman - 2024 - Asian Bioethics Review 16 (3):373-389.
    This paper examines the Saudi Food and Drug Authority’s (SFDA) Guidance on Artificial Intelligence (AI) and Machine Learning (ML) technologies based Medical Devices (the MDS-G010). The SFDA has pioneered binding requirements designed for manufacturers to obtain Medical Device Marketing Authorization. The regulation of AI in health is at an early stage worldwide. Therefore, it is critical to examine the scope and nature of the MDS-G010, its influences, and its future directions. It is argued that the guidance is a (...)
  49. Clinical internship environment and caring behaviours among nursing students: A moderated mediation model. Zhuo-er Huang, Xing Qiu, Ya-Qian Fu, Ai-di Zhang, Hui Huang, Jia Liu, Jin Yan & Qi-Feng Yi - 2024 - Nursing Ethics 31 (8):1481-1498.
    Background Caring behaviour is critical for nursing quality, and the clinical internship environment is a crucial setting for preparing nursing students for caring behaviours. Evidence about how to develop nursing students’ caring behaviour in the clinical environment is still emerging. However, the mechanism between the clinical internship environment and caring behaviour remains unclear, especially the mediating role of moral sensitivity and the moderating effect of self-efficacy. Research objective This study aimed to examine the mediating effect of moral sensitivity and the (...)
  50. Generative AI and medical ethics: the state of play. Hazem Zohny, Sebastian Porsdam Mann, Brian D. Earp & John McMillan - 2024 - Journal of Medical Ethics 50 (2):75-76.
    Since their public launch, a little over a year ago, large language models (LLMs) have inspired a flurry of analysis about what their implications might be for medical ethics, and for society more broadly. 1 Much of the recent debate has moved beyond categorical evaluations of the permissibility or impermissibility of LLM use in different general contexts (eg, at work or school), to more fine-grained discussions of the criteria that should govern their appropriate use in specific domains or towards (...)
    5 citations
Results 1–50 of 973