Results for 'trust in AI'

975 found
  1. In AI We Trust Incrementally: a Multi-layer Model of Trust to Analyze Human-Artificial Intelligence Interactions. Andrea Ferrario, Michele Loi & Eleonora Viganò - 2020 - Philosophy and Technology 33 (3):523-539.
    Real engines of the artificial intelligence revolution, machine learning models, and algorithms are embedded nowadays in many services and products around us. As a society, we argue it is now necessary to transition into a phronetic paradigm focused on the ethical dilemmas stemming from the conception and application of AIs to define actionable recommendations as well as normative solutions. However, both academic research and society-driven initiatives are still quite far from clearly defining a solid program of study and intervention. In (...)
    22 citations
  2. In AI We Trust: Ethics, Artificial Intelligence, and Reliability. Mark Ryan - 2020 - Science and Engineering Ethics 26 (5):2749-2767.
    One of the main difficulties in assessing artificial intelligence (AI) is the tendency for people to anthropomorphise it. This becomes particularly problematic when we attach human moral activities to AI. For example, the European Commission’s High-level Expert Group on AI (HLEG) have adopted the position that we should establish a relationship of trust with AI and should cultivate trustworthy AI (HLEG AI Ethics guidelines for trustworthy AI, 2019, p. 35). Trust is one of the most important and defining (...)
    68 citations
  3. The problem with trust: on the discursive commodification of trust in AI. Steffen Krüger & Christopher Wilson - forthcoming - AI and Society:1-9.
    This commentary draws critical attention to the ongoing commodification of trust in policy and scholarly discourses of artificial intelligence (AI) and society. Based on an assessment of publications discussing the implementation of AI in governmental and private services, our findings indicate that this discursive trend towards commodification is driven by the need for a trusting population of service users to harvest data at scale and leads to the discursive construction of trust as an essential good on a par (...)
    2 citations
  4. In AI we trust? Perceptions about automated decision-making by artificial intelligence. Theo Araujo, Natali Helberger, Sanne Kruikemeier & Claes H. de Vreese - 2020 - AI and Society 35 (3):611-623.
    Fueled by ever-growing amounts of (digital) data and advances in artificial intelligence, decision-making in contemporary societies is increasingly delegated to automated processes. Drawing from social science theories and from the emerging body of research about algorithmic appreciation and algorithmic perceptions, the current study explores the extent to which personal characteristics can be linked to perceptions of automated decision-making by AI, and the boundary conditions of these perceptions, namely the extent to which such perceptions differ across media, (public) health, and judicial (...)
    51 citations
  5. Twenty-four years of empirical research on trust in AI: a bibliometric review of trends, overlooked issues, and future directions. Michaela Benk, Sophie Kerstan, Florian von Wangenheim & Andrea Ferrario - forthcoming - AI and Society:1-24.
    Trust is widely regarded as a critical component to building artificial intelligence (AI) systems that people will use and safely rely upon. As research in this area continues to evolve, it becomes imperative that the research community synchronizes its empirical efforts and aligns on the path toward effective knowledge creation. To lay the groundwork toward achieving this objective, we performed a comprehensive bibliometric analysis, supplemented with a qualitative content analysis of over two decades of empirical research measuring trust (...)
  6. Trust and ethics in AI. Hyesun Choung, Prabu David & Arun Ross - 2023 - AI and Society 38 (2):733-745.
    With the growing influence of artificial intelligence (AI) in our lives, the ethical implications of AI have received attention from various communities. Building on previous work on trust in people and technology, we advance a multidimensional, multilevel conceptualization of trust in AI and examine the relationship between trust and ethics using the data from a survey of a national sample in the U.S. This paper offers two key dimensions of trust in AI—human-like trust and functionality (...)
    6 citations
  7. Living with Uncertainty: Full Transparency of AI isn’t Needed for Epistemic Trust in AI-based Science. Uwe Peters - forthcoming - Social Epistemology Review and Reply Collective.
    Can AI developers be held epistemically responsible for the processing of their AI systems when these systems are epistemically opaque? And can explainable AI (XAI) provide public justificatory reasons for opaque AI systems’ outputs? Koskinen (2024) gives negative answers to both questions. Here, I respond to her and argue for affirmative answers. More generally, I suggest that when considering people’s uncertainty about the factors causally determining an opaque AI’s output, it might be worth keeping in mind that a degree of (...)
  8. Trust in Medical Artificial Intelligence: A Discretionary Account. Philip J. Nickel - 2022 - Ethics and Information Technology 24 (1):1-10.
    This paper sets out an account of trust in AI as a relationship between clinicians, AI applications, and AI practitioners in which AI is given discretionary authority over medical questions by clinicians. Compared to other accounts in recent literature, this account more adequately explains the normative commitments created by practitioners when inviting clinicians’ trust in AI. To avoid committing to an account of trust in AI applications themselves, I sketch a reductive view on which discretionary authority is (...)
    15 citations
  9. Limits of trust in medical AI. Joshua James Hatherley - 2020 - Journal of Medical Ethics 46 (7):478-481.
    Artificial intelligence (AI) is expected to revolutionise the practice of medicine. Recent advancements in the field of deep learning have demonstrated success in a variety of clinical tasks: detecting diabetic retinopathy from images, predicting hospital readmissions, aiding in the discovery of new drugs, etc. AI’s progress in medicine, however, has led to concerns regarding the potential effects of this technology on relationships of trust in clinical practice. In this paper, I will argue that there is merit to these concerns, since (...)
    40 citations
  10. Assessing the communication gap between AI models and healthcare professionals: Explainability, utility and trust in AI-driven clinical decision-making. Oskar Wysocki, Jessica Katharine Davies, Markel Vigo, Anne Caroline Armstrong, Dónal Landers, Rebecca Lee & André Freitas - 2023 - Artificial Intelligence 316 (C):103839.
  11. Encompassing trust in medical AI from the perspective of medical students: a quantitative comparative study. Anamaria Malešević, Mária Kolesárová & Anto Čartolovni - 2024 - BMC Medical Ethics 25 (1):1-11.
    In the years to come, artificial intelligence will become an indispensable tool in medical practice. The digital transformation will undoubtedly affect today’s medical students. This study focuses on trust from the perspective of three groups of medical students - students from Croatia, students from Slovakia, and international students studying in Slovakia. A paper-pen survey was conducted using a non-probabilistic convenience sample. In the second half of 2022, 1715 students were surveyed at five faculties in Croatia and three in Slovakia. (...)
  12. Trust and Trustworthiness in AI. Juan Manuel Durán & Giorgia Pozzi - 2025 - Philosophy and Technology 38 (1):1-31.
    Achieving trustworthy AI is increasingly considered an essential desideratum to integrate AI systems into sensitive societal fields, such as criminal justice, finance, medicine, and healthcare, among others. For this reason, it is important to spell out clearly its characteristics, merits, and shortcomings. This article is the first survey in the specialized literature that maps out the philosophical landscape surrounding trust and trustworthiness in AI. To achieve our goals, we proceed as follows. We start by discussing philosophical positions on trust and trustworthiness, focusing on interpersonal accounts of trust. This allows us to explain why trust, in its most general terms, is to be understood as reliance plus some “extra factor”. We then turn to the first part of the definition provided, i.e., reliance, and analyze two opposing approaches to establishing AI systems’ reliability. On the one hand, we consider transparency and, on the other, computational reliabilism. Subsequently, we focus on debates revolving around the “extra factor”. To this end, we consider viewpoints that most actively resist the possibility and desirability of trusting AI systems before turning to the analysis of the most prominent advocates of it. Finally, we take up the main conclusions of the previous sections and briefly point at issues that remain open and need further attention.
  13. The Turing Test and the Issue of Trust in AI Systems. Paweł Stacewicz & Krzysztof Sołoducha - 2024 - Studies in Logic, Grammar and Rhetoric 69 (1):353-364.
    The Turing test, which is a verbal test of the indistinguishability of machine and human intelligence, is a historically important idea that has set a way of thinking about the AI (artificial intelligence) project that is still relevant today. According to it, the benchmark/blueprint for AI is human intelligence, and the key skill of AI should be its communicative proficiency – which includes explaining decisions made by the machine. Passing the original Turing test by a machine does not guarantee that (...)
  14. Balancing Transparency and Trust: Reevaluating AI Disclosure in Healthcare. Michael Pflanzer - 2025 - American Journal of Bioethics 25 (3):153-156.
  15. Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. Juan Manuel Durán & Karin Rolanda Jongsma - 2021 - Journal of Medical Ethics 47 (5): medethics-2020-106820.
    The use of black box algorithms in medicine has raised scholarly concerns due to their opaqueness and lack of trustworthiness. Concerns about potential bias, accountability and responsibility, patient autonomy and compromised trust transpire with black box algorithms. These worries connect epistemic concerns with normative issues. In this paper, we outline that black box algorithms are less problematic for epistemic reasons than many scholars seem to believe. By outlining that more transparency in algorithms is not always necessary, and by explaining (...)
    64 citations
  16. Anthropomorphism in AI: Hype and Fallacy. Adriana Placani - 2024 - AI and Ethics.
    This essay focuses on anthropomorphism as both a form of hype and fallacy. As a form of hype, anthropomorphism is shown to exaggerate AI capabilities and performance by attributing human-like traits to systems that do not possess them. As a fallacy, anthropomorphism is shown to distort moral judgments about AI, such as those concerning its moral character and status, as well as judgments of responsibility and trust. By focusing on these two dimensions of anthropomorphism in AI, the essay highlights (...)
    4 citations
  17. Explanations in AI as Claims of Tacit Knowledge. Nardi Lam - 2022 - Minds and Machines 32 (1):135-158.
    As AI systems become increasingly complex it may become unclear, even to the designer of a system, why exactly a system does what it does. This leads to a lack of trust in AI systems. To solve this, the field of explainable AI has been working on ways to produce explanations of these systems’ behaviors. Many methods in explainable AI, such as LIME, offer only a statistical argument for the validity of their explanations. However, some methods instead study the (...)
    1 citation
  18. Shall AI moderators be made visible? Perception of accountability and trust in moderation systems on social media platforms. Dominic DiFranzo, Natalya N. Bazarova, Aparajita Bhandari & Marie Ozanne - 2022 - Big Data and Society 9 (2).
    This study examines how visibility of a content moderator and ambiguity of moderated content influence perception of the moderation system in a social media environment. In the course of a two-day pre-registered experiment conducted in a realistic social media simulation, participants encountered moderated comments that were either unequivocally harsh or ambiguously worded, and the source of moderation was either unidentified, or attributed to other users or an automated system (AI). The results show that when comments were moderated by an AI (...)
    1 citation
  19. Should we replace radiologists with deep learning? Pigeons, error and trust in medical AI. Ramón Alvarado - 2021 - Bioethics 36 (2):121-133.
    13 citations
  20. Zombies in the Loop? Humans Trust Untrustworthy AI-Advisors for Ethical Decisions. Sebastian Krügel, Andreas Ostermaier & Matthias Uhl - 2022 - Philosophy and Technology 35 (1):1-37.
    Departing from the claim that AI needs to be trustworthy, we find that ethical advice from an AI-powered algorithm is trusted even when its users know nothing about its training data and when they learn information about it that warrants distrust. We conducted online experiments where the subjects took the role of decision-makers who received advice from an algorithm on how to deal with an ethical dilemma. We manipulated the information about the algorithm and studied its influence. Our findings suggest (...)
    5 citations
  21. Trust, Explainability and AI. Sam Baron - 2025 - Philosophy and Technology 38 (4):1-23.
    There has been a surge of interest in explainable artificial intelligence (XAI). It is commonly claimed that explainability is necessary for trust in AI, and that this is why we need it. In this paper, I argue that for some notions of trust it is plausible that explainability is indeed a necessary condition, but that these kinds of trust are not appropriate for AI. For notions of trust that are appropriate for AI, explainability is not a (...)
    1 citation
  22. Trust does not need to be human: it is possible to trust medical AI. Andrea Ferrario, Michele Loi & Eleonora Viganò - 2021 - Journal of Medical Ethics 47 (6):437-438.
    In his recent article ‘Limits of trust in medical AI,’ Hatherley argues that, if we believe that the motivations that are usually recognised as relevant for interpersonal trust have to be applied to interactions between humans and medical artificial intelligence, then these systems do not appear to be the appropriate objects of trust. In this response, we argue that it is possible to discuss trust in medical artificial intelligence (AI), if one refrains from simply assuming that (...)
    17 citations
  23. The curious case of “trust” in the light of changing doctor–patient relationships. Seppe Segers & Heidi Mertes - 2022 - Bioethics 36 (8):849-857.
    The centrality of trust in traditional doctor–patient relationships has been criticized as inordinately paternalistic, yet in today's discussions about medical ethics—mostly in response to disruptive innovation in healthcare—trust reappears as an asset to enable empowerment. To turn away from paternalistic trust‐based doctor–patient relationships and to arrive at an empowerment‐based medical model, increasing reference is made to the importance of nurturing trust in technologies that are supposed to bring that empowerment. In this article we stimulate discussion about (...)
    9 citations
  24. Trust in Intrusion Detection Systems: An Investigation of Performance Analysis for Machine Learning and Deep Learning Models. Basim Mahbooba, Radhya Sahal, Martin Serrano & Wael Alosaimi - 2021 - Complexity 2021:1-23.
    To design and develop AI-based cybersecurity systems, such as intrusion detection systems (IDS), that users can justifiably trust, one needs to evaluate the impact of trust using machine learning and deep learning technologies. To guide the design and implementation of trusted AI-based systems in IDS, this paper provides a comparison among machine learning and deep learning models to investigate the trust impact based on the accuracy of the trusted AI-based systems regarding the malicious data in IDS. The four machine learning techniques are decision (...)
  25. Intentional machines: A defence of trust in medical artificial intelligence. Georg Starke, Rik van den Brule, Bernice Simone Elger & Pim Haselager - 2021 - Bioethics 36 (2):154-161.
    Trust constitutes a fundamental strategy to deal with risks and uncertainty in complex societies. In line with the vast literature stressing the importance of trust in doctor–patient relationships, trust is therefore regularly suggested as a way of dealing with the risks of medical artificial intelligence (AI). Yet, this approach has come under charge from different angles. At least two lines of thought can be distinguished: (1) that trusting AI is conceptually confused, that is, that we cannot trust AI; and (2) that it is also dangerous, that is, that we should not trust AI—particularly if the stakes are as high as they routinely are in medicine. In this paper, we aim to defend a notion of trust in the context of medical AI against both charges. To do so, we highlight the technically mediated intentions manifest in AI systems, rendering trust a conceptually plausible stance for dealing with them. Based on literature from human–robot interactions, psychology and sociology, we then propose a novel model to analyse notions of trust, distinguishing between three aspects: reliability, competence, and intentions. We discuss each aspect and make suggestions regarding how medical AI may become worthy of our trust.
    15 citations
  26. Computer-mediated trust in self-interested expert recommendations. Jonathan Ben-Naim, Jean-François Bonnefon, Andreas Herzig, Sylvie Leblois & Emiliano Lorini - 2010 - AI and Society 25 (4):413-422.
    Important decisions are often based on a distributed process of information processing, from a knowledge base that is itself distributed among agents. The simplest such situation is that where a decision-maker seeks the recommendations of experts. Because experts may have vested interests in the consequences of their recommendations, decision-makers usually seek the advice of experts they trust. Trust, however, is a commodity that is usually built through repeated face time and social interaction and thus cannot easily be built (...)
    1 citation
  27. AI-Inclusivity in Healthcare: Motivating an Institutional Epistemic Trust Perspective. Kritika Maheshwari, Christoph Jedan, Imke Christiaans, Mariëlle van Gijn, Els Maeckelberghe & Mirjam Plantinga - 2024 - Cambridge Quarterly of Healthcare Ethics:1-15.
    This paper motivates institutional epistemic trust as an important ethical consideration informing the responsible development and implementation of artificial intelligence (AI) technologies (or AI-inclusivity) in healthcare. Drawing on recent literature on epistemic trust and public trust in science, we start by examining the conditions under which we can have institutional epistemic trust in AI-inclusive healthcare systems and their members as providers of medical information and advice. In particular, we discuss that institutional epistemic trust in AI-inclusive (...)
  28. Nowotny, Helga (2021). In AI we trust: power, illusion and control of predictive algorithms, Polity, Cambridge, UK, ISBN-13: 978-1509548811. [REVIEW] Karamjit S. Gill - 2022 - AI and Society 37 (1):411-414.
  29. Keep trusting! A plea for the notion of Trustworthy AI. Giacomo Zanotti, Mattia Petrolo, Daniele Chiffi & Viola Schiaffonati - 2024 - AI and Society 39 (6):2691-2702.
    A lot of attention has recently been devoted to the notion of Trustworthy AI (TAI). However, the very applicability of the notions of trust and trustworthiness to AI systems has been called into question. A purely epistemic account of trust can hardly ground the distinction between trustworthy and merely reliable AI, while it has been argued that insisting on the importance of the trustee’s motivations and goodwill makes the notion of TAI a categorical error. After providing an overview (...)
    4 citations
  30. (E)‐Trust and Its Function: Why We Shouldn't Apply Trust and Trustworthiness to Human–AI Relations. Pepijn Al - 2023 - Journal of Applied Philosophy 40 (1):95-108.
    With an increasing use of artificial intelligence (AI) systems, theorists have analyzed and argued for the promotion of trust in AI and trustworthy AI. Critics have objected that AI does not have the characteristics to be an appropriate subject for trust. However, this argumentation is open to counterarguments. Firstly, rejecting trust in AI denies the trust attitudes that some people experience. Secondly, we can trust other non‐human entities, such as animals and institutions, so why can (...)
    3 citations
  31. Being Pragmatic About Reliance and Trust in Artificial Intelligence. Andrea Ferrario - manuscript
    The ongoing debate about reliance and trust in artificial intelligence (AI) systems continues to challenge our understanding and application of these concepts in human-AI interactions. In this work, we argue for a pragmatic approach to defining reliance and trust in AI. Our approach is grounded in three expectations that should guide human-AI interactions: appropriate reliance, efficiency, and motivation by objective reasons. By focusing on these expectations, we show that it is possible to reconcile reliance with trust in (...)
  32. (1 other version) Philosophical evaluation of the conceptualisation of trust in the NHS’ Code of Conduct for artificial intelligence-driven technology. Soogeun Samuel Lee - 2022 - Journal of Medical Ethics 48 (4):272-277.
    The UK Government’s Code of Conduct for data-driven health and care technologies, specifically artificial intelligence-driven technologies, comprises 10 principles that outline a gold-standard of ethical conduct for AI developers and implementers within the National Health Service. Considering the importance of trust in medicine, in this essay I aim to evaluate the conceptualisation of trust within this piece of ethical governance. I examine the Code of Conduct, specifically Principle 7, and extract two positions: a principle of rationally justified (...)
    2 citations
  33. How Transparency Modulates Trust in Artificial Intelligence. John Zerilli, Umang Bhatt & Adrian Weller - 2022 - Patterns 3 (4):1-10.
    We review the literature on how perceiving an AI making mistakes violates trust and how such violations might be repaired. In doing so, we discuss the role played by various forms of algorithmic transparency in the process of trust repair, including explanations of algorithms, uncertainty estimates, and performance metrics.
    6 citations
  34. Making Trust Safe for AI? Non-agential Trust as a Conceptual Engineering Problem. Juri Viehoff - 2023 - Philosophy and Technology 36 (4):1-29.
    Should we be worried that the concept of trust is increasingly used when we assess non-human agents and artefacts, say robots and AI systems? Whilst some authors have developed explanations of the concept of trust with a view to accounting for trust in AI systems and other non-agents, others have rejected the idea that we should extend trust in this way. The article advances this debate by bringing insights from conceptual engineering to bear on this issue. (...)
    2 citations
  35. Attitudinal Tensions in the Joint Pursuit of Explainable and Trusted AI. Devesh Narayanan & Zhi Ming Tan - 2023 - Minds and Machines 33 (1):55-82.
    It is frequently demanded that AI-based Decision Support Tools (AI-DSTs) ought to be both explainable to, and trusted by, those who use them. The joint pursuit of these two principles is ordinarily believed to be uncontroversial. In fact, a common view is that AI systems should be made explainable so that they can be trusted, and in turn, accepted by decision-makers. However, the moral scope of these two principles extends far beyond this particular instrumental connection. This paper argues that if (...)
    1 citation
  36. Before and beyond trust: reliance in medical AI. Charalampia Kerasidou, Angeliki Kerasidou, Monika Buscher & Stephen Wilkinson - 2021 - Journal of Medical Ethics 48 (11):852-856.
    Artificial intelligence is changing healthcare and the practice of medicine as data-driven science and machine-learning technologies, in particular, are contributing to a variety of medical and clinical tasks. Such advancements have also raised many questions, especially about public trust. As a response to these concerns there has been a concentrated effort from public bodies, policy-makers and technology companies leading the way in AI to address what is identified as a "public trust deficit". This paper argues that a focus (...)
    13 citations
  37. Designing trust in the Internet services. Irina P. Kuzheleva-Sagan & Natalya A. Suchkova - 2016 - AI and Society 31 (3):381-392.
  38. The need for epistemic humility in AI-assisted pain assessment. Rachel A. Katz, S. Scott Graham & Daniel Z. Buchman - forthcoming - Medicine, Health Care and Philosophy:1-11.
    It has been difficult historically for physicians, patients, and philosophers alike to quantify pain given that pain is commonly understood as an individual and subjective experience. The process of measuring and diagnosing pain is often a fraught and complicated process. New developments in diagnostic technologies assisted by artificial intelligence promise more accurate and efficient diagnosis for patients, but these tools are known to reproduce and further entrench existing issues within the healthcare system, such as poor patient treatment and the replication (...)
  39. Rebooting AI: Building Artificial Intelligence We Can Trust. Gary Marcus & Ernest Davis - 2019 - Vintage.
    Two leaders in the field offer a compelling analysis of the current state of the art and reveal the steps we must take to achieve a truly robust artificial intelligence. Despite the hype surrounding AI, creating an intelligence that rivals or exceeds human levels is far more complicated than we have been led to believe. Professors Gary Marcus and Ernest Davis have spent their careers at the forefront of AI research and have witnessed some of the greatest milestones in the (...)
    15 citations
  40. Ethical Perceptions of AI in Hiring and Organizational Trust: The Role of Performance Expectancy and Social Influence. Maria Figueroa-Armijos, Brent B. Clark & Serge P. da Motta Veiga - 2023 - Journal of Business Ethics 186 (1):179-197.
    The use of artificial intelligence (AI) in hiring entails vast ethical challenges. As such, using an ethical lens to study this phenomenon is to better understand whether and how AI matters in hiring. In this paper, we examine whether ethical perceptions of using AI in the hiring process influence individuals’ trust in the organizations that use it. Building on the organizational trust model and the unified theory of acceptance and use of technology, we explore whether ethical perceptions are (...)
    2 citations
  41. Trust and Trust-Engineering in Artificial Intelligence Research: Theory and Praxis. Melvin Chen - 2021 - Philosophy and Technology 34 (4):1429-1447.
    In this paper, I will identify two problems of trust in an AI-relevant context: a theoretical problem and a practical one. I will identify and address a number of skeptical challenges to an AI-relevant theory of trust. In addition, I will identify what I shall term the ‘scope challenge’, which I take to hold for any AI-relevant theory of trust that purports to be representationally adequate to the multifarious forms of trust and AI. Thereafter, I will (...)
    3 citations
  42. Application of artificial intelligence: risk perception and trust in the work context with different impact levels and task types. Uwe Klein, Jana Depping, Laura Wohlfahrt & Pantaleon Fassbender - 2024 - AI and Society 39 (5):2445-2456.
    Following the studies of Araujo et al. (AI Soc 35:611–623, 2020) and Lee (Big Data Soc 5:1–16, 2018), this empirical study uses two scenario-based online experiments. The sample consists of 221 subjects from Germany, differing in both age and gender. The original studies are not replicated one-to-one. New scenarios are constructed as realistically as possible and focused on everyday work situations. They are based on the AI acceptance model of Scheuer (Grundlagen intelligenter KI-Assistenten und deren vertrauensvolle Nutzung. Springer, Wiesbaden, 2020) (...)
  43. Toward an empathy-based trust in human-otheroid relations.Abootaleb Safdari - forthcoming - AI and Society:1-16.
    The primary aim of this paper is twofold: firstly, to argue that we can enter into relation of trust with robots and AI systems (automata); and secondly, to provide a comprehensive description of the underlying mechanisms responsible for this relation of trust. To achieve these objectives, the paper first undertakes a critical examination of the main arguments opposing the concept of a trust-based relation with automata. Showing that these arguments face significant challenges that render them untenable, it (...)
  44. Crossing the Trust Gap in Medical AI: Building an Abductive Bridge for xAI.Steven S. Gouveia & Jaroslav Malík - 2024 - Philosophy and Technology 37 (3):1-25.
    In this paper, we argue that one way to approach what is known in the literature as the “Trust Gap” in Medical AI is to focus on explanations from an Explainable AI (xAI) perspective. Against the current framework on xAI – which does not offer a real solution – we argue for a pragmatist turn, one that focuses on understanding how we provide explanations in Traditional Medicine (TM), composed by human agents only. Following this, explanations have two specific relevant (...)
  45. Essay in honor of Rino Falcone: Trust in the Rational and Social Bands of Cognitive Architectures.Antonio Lieto - 2025 - Essays in Honor of Rino Falcone.
    In this short chapter I propose some possible research directions addressing the problem of how cognitive, formal and computational models of trust (i.e. one of the main areas of Rino Falcone’s contributions) can play a major role in the development of cognitive modeling and AI research community in the context of what Allen Newell called the rational and social bands.
  46. Explanation and trust: what to tell the user in security and AI? [REVIEW]Wolter Pieters - 2011 - Ethics and Information Technology 13 (1):53-64.
    There is a common problem in artificial intelligence (AI) and information security. In AI, an expert system needs to be able to justify and explain a decision to the user. In information security, experts need to be able to explain to the public why a system is secure. In both cases, an important goal of explanation is to acquire or maintain the users’ trust. In this paper, I investigate the relation between explanation and trust in the context of (...)
  47. Transparency and the Black Box Problem: Why We Do Not Trust AI.Warren J. von Eschenbach - 2021 - Philosophy and Technology 34 (4):1607-1622.
    With automation of routine decisions coupled with more intricate and complex information architecture operating this automation, concerns are increasing about the trustworthiness of these systems. These concerns are exacerbated by a class of artificial intelligence that uses deep learning, an algorithmic system of deep neural networks, which on the whole remain opaque or hidden from human comprehension. This situation is commonly referred to as the black box problem in AI. Without understanding how AI reaches its conclusions, it is an open (...)
  48. In Algorithms We Trust: Magical Thinking, Superintelligent Ai and Quantum Computing.Nathan Schradle - 2020 - Zygon 55 (3):733-747.
    This article analyzes current attitudes toward artificial intelligence (AI) and quantum computing and argues that they represent a modern‐day form of magical thinking. It proposes that AI and quantum computing are thus excellent examples of the ways that traditional distinctions between religion, science, and magic fail to account for the vibrancy and energy that surround modern technologies.
  49. Moving beyond Technical Issues to Stakeholder Involvement: Key Areas for Consideration in the Development of Human-Centred and Trusted AI in Healthcare.Jane Kaye, Nisha Shah, Atsushi Kogetsu, Sarah Coy, Amelia Katirai, Machie Kuroda, Yan Li, Kazuto Kato & Beverley Anne Yamamoto - 2024 - Asian Bioethics Review 16 (3):501-511.
    Discussion around the increasing use of AI in healthcare tends to focus on the technical aspects of the technology rather than the socio-technical issues associated with implementation. In this paper, we argue for the development of a sustained societal dialogue between stakeholders around the use of AI in healthcare. We contend that a more human-centred approach to AI implementation in healthcare is needed which is inclusive of the views of a range of stakeholders. We identify four key areas to support (...)
  50. AI Through Ethical Lenses: A Discourse Analysis of Guidelines for AI in Healthcare.Laura Arbelaez Ossa, Stephen R. Milford, Michael Rost, Anja K. Leist, David M. Shaw & Bernice S. Elger - 2024 - Science and Engineering Ethics 30 (3):1-21.
    While the technologies that enable Artificial Intelligence (AI) continue to advance rapidly, there are increasing promises regarding AI’s beneficial outputs and concerns about the challenges of human–computer interaction in healthcare. To address these concerns, institutions have increasingly resorted to publishing AI guidelines for healthcare, aiming to align AI with ethical practices. However, guidelines as a form of written language can be analyzed to recognize the reciprocal links between its textual communication and underlying societal ideas. From this perspective, we conducted a (...)