Results for 'Explainable'

970 found
  1. Michael Rutter.Interplay Explained - 2008 - Contemporary Issues in Bioethics 405 (6788).
  2. Understanding, Idealization, and Explainable AI.Will Fleisher - 2022 - Episteme 19 (4):534-560.
    Many AI systems that make important decisions are black boxes: how they function is opaque even to their developers. This is due to their high complexity and to the fact that they are trained rather than programmed. Efforts to alleviate the opacity of black box systems are typically discussed in terms of transparency, interpretability, and explainability. However, there is little agreement about what these key concepts mean, which makes it difficult to adjudicate the success or promise of opacity alleviation methods. (...)
    18 citations
  3. The Pragmatic Turn in Explainable Artificial Intelligence.Andrés Páez - 2019 - Minds and Machines 29 (3):441-459.
    In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a previous grasp of what it means to say that an agent understands a model or a decision, the explanatory strategies (...)
    39 citations
  4. Modality is Not Explainable by Essence.Carlos Romero - 2019 - Philosophical Quarterly 69 (274):121-141.
    Some metaphysicians believe that metaphysical modality is explainable by the essences of objects. In §II, I spell out the definitional view of essence, and in §III, a working notion of metaphysical explanation. Then, in §IV, I consider and reject five natural ways to explain necessity by essence: in terms of the principle that essential properties can't change, in terms of the supposed obviousness of the necessity of essential truth, in terms of the logical necessity of definitions, in terms of (...)
    28 citations
  5. Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence.Carlos Zednik - 2019 - Philosophy and Technology 34 (2):265-288.
    Many of the computing systems programmed using Machine Learning are opaque: it is difficult to know why they do what they do or how they work. Explainable Artificial Intelligence aims to develop analytic techniques that render opaque computing systems transparent, but lacks a normative framework with which to evaluate these techniques’ explanatory successes. The aim of the present discussion is to develop such a framework, paying particular attention to different stakeholders’ distinct explanatory requirements. Building on an analysis of “opacity” (...)
    66 citations
  6. Cultural Bias in Explainable AI Research.Uwe Peters & Mary Carman - forthcoming - Journal of Artificial Intelligence Research.
    For synergistic interactions between humans and artificial intelligence (AI) systems, AI outputs often need to be explainable to people. Explainable AI (XAI) systems are commonly tested in human user studies. However, whether XAI researchers consider potential cultural differences in human explanatory needs remains unexplored. We highlight psychological research that found significant differences in human explanations between many people from Western, commonly individualist countries and people from non-Western, often collectivist countries. We argue that XAI research currently overlooks these variations (...)
    3 citations
  7. Scalable and explainable legal prediction.L. Karl Branting, Craig Pfeifer, Bradford Brown, Lisa Ferro, John Aberdeen, Brandy Weiss, Mark Pfaff & Bill Liao - 2020 - Artificial Intelligence and Law 29 (2):213-238.
    Legal decision-support systems have the potential to improve access to justice, administrative efficiency, and judicial consistency, but broad adoption of such systems is contingent on development of technologies with low knowledge-engineering, validation, and maintenance costs. This paper describes two approaches to an important form of legal decision support—explainable outcome prediction—that obviate both annotation of an entire decision corpus and manual processing of new cases. The first approach, which uses an attention network for prediction and attention weights to highlight salient (...)
    11 citations
  8. Evidential Pluralism and Explainable AI.Jon Williamson - unknown
    1 citation
  9. Our Reliability is in Principle Explainable.Dan Baras - 2017 - Episteme 14 (2):197-211.
    Non-skeptical robust realists about normativity, mathematics, or any other domain of non- causal truths are committed to a correlation between their beliefs and non- causal, mind-independent facts. Hartry Field and others have argued that if realists cannot explain this striking correlation, that is a strong reason to reject their theory. Some consider this argument, known as the Benacerraf–Field argument, as the strongest challenge to robust realism about mathematics, normativity, and even logic. In this article I offer two closely related accounts (...)
    32 citations
  10. In his first-century BCE work De Natura Deorum the Roman philosopher Cicero recounts the explanation offered by Epicurus for the fact that 'nature has imprinted an idea of [the gods] in the minds of all mankind'. His explanation was one that was at one level 'naturalistic' and at another level 'theological'. He described it this way. [REVIEW]Explaining Away - 2009 - In Jeffrey Schloss & Michael J. Murray (eds.), The believing primate: scientific, philosophical, and theological reflections on the origin of religion. Oxford: Oxford University Press. pp. 179.
  11. From Responsibility to Reason-Giving Explainable Artificial Intelligence.Kevin Baum, Susanne Mantel, Timo Speith & Eva Schmidt - 2022 - Philosophy and Technology 35 (1):1-30.
    We argue that explainable artificial intelligence (XAI), specifically reason-giving XAI, often constitutes the most suitable way of ensuring that someone can properly be held responsible for decisions that are based on the outputs of artificial intelligent (AI) systems. We first show that, to close moral responsibility gaps (Matthias 2004), often a human in the loop is needed who is directly responsible for particular AI-supported decisions. Second, we appeal to the epistemic condition on moral responsibility to argue that, in order (...)
    18 citations
  12. Explanatory pragmatism: a context-sensitive framework for explainable medical AI.Diana Robinson & Rune Nyrup - 2022 - Ethics and Information Technology 24 (1).
    Explainable artificial intelligence (XAI) is an emerging, multidisciplinary field of research that seeks to develop methods and tools for making AI systems more explainable or interpretable. XAI researchers increasingly recognise explainability as a context-, audience- and purpose-sensitive phenomenon, rather than a single well-defined property that can be directly measured and optimised. However, since there is currently no overarching definition of explainability, this poses a risk of miscommunication between the many different researchers within this multidisciplinary space. This is the (...)
    12 citations
  13. Higher-Order Thought and Representationalism.Explaining Consciousness - 2002 - In David John Chalmers (ed.), Philosophy of Mind: Classical and Contemporary Readings. New York: Oxford University Press USA. pp. 406.
  14. Scientific Exploration and Explainable Artificial Intelligence.Carlos Zednik & Hannes Boelsen - 2022 - Minds and Machines 32 (1):219-239.
    Models developed using machine learning are increasingly prevalent in scientific research. At the same time, these models are notoriously opaque. Explainable AI aims to mitigate the impact of opacity by rendering opaque models transparent. More than being just the solution to a problem, however, Explainable AI can also play an invaluable role in scientific exploration. This paper describes how post-hoc analytic techniques from Explainable AI can be used to refine target phenomena in medical science, to identify starting (...)
    16 citations
  15. (1 other version)Recent issues have included.Explaining Action, David S. Shwayder, Charles Taylor, David Rayficld, Colin Radford, Joseph Margolis, Arthur C. Danto, James Cargile, K. Robert & B. May - forthcoming - Foundations of Language.
     
  16. Is AI case that is explainable, intelligible or hopeless?Łukasz Mścisławski - 2022 - Zagadnienia Filozoficzne W Nauce 73:357-369.
    Wrocław University of Science and Technology, Poland This article is a review of the book _Making AI Intelligible. Philosophical Foundations_, written by Herman Cappelen and Josh Dever, and published in 2021 by Oxford University Press. The authors of the reviewed book address the difficult issue of interpreting the results provided by AI systems and the links between human-specific content handling and the internal mechanisms of these systems. Considering the potential usefulness of various frameworks developed in philosophy to solve the problem, (...)
  17. Special issue on Explainable Artificial Intelligence.Tim Miller, Robert Hoffman, Ofra Amir & Andreas Holzinger - 2022 - Artificial Intelligence 307 (C):103705.
    1 citation
  18. Unjustified Sample Sizes and Generalizations in Explainable AI Research: Principles for More Inclusive User Studies.Uwe Peters & Mary Carman - forthcoming - IEEE Intelligent Systems.
    Many ethical frameworks require artificial intelligence (AI) systems to be explainable. Explainable AI (XAI) models are frequently tested for their adequacy in user studies. Since different people may have different explanatory needs, it is important that participant samples in user studies are large enough to represent the target population to enable generalizations. However, it is unclear to what extent XAI researchers reflect on and justify their sample sizes or avoid broad generalizations across people. We analyzed XAI user studies (...)
    1 citation
  19. Ameliorating Algorithmic Bias, or Why Explainable AI Needs Feminist Philosophy.Linus Ta-Lun Huang, Hsiang-Yun Chen, Ying-Tung Lin, Tsung-Ren Huang & Tzu-Wei Hung - 2022 - Feminist Philosophy Quarterly 8 (3).
    Artificial intelligence (AI) systems are increasingly adopted to make decisions in domains such as business, education, health care, and criminal justice. However, such algorithmic decision systems can have prevalent biases against marginalized social groups and undermine social justice. Explainable artificial intelligence (XAI) is a recent development aiming to make an AI system’s decision processes less opaque and to expose its problematic biases. This paper argues against technical XAI, according to which the detection and interpretation of algorithmic bias can be (...)
    3 citations
  20. Conceptualizing understanding in explainable artificial intelligence (XAI): an abilities-based approach.Timo Speith, Barnaby Crook, Sara Mann, Astrid Schomäcker & Markus Langer - 2024 - Ethics and Information Technology 26 (2):1-15.
    A central goal of research in explainable artificial intelligence (XAI) is to facilitate human understanding. However, understanding is an elusive concept that is difficult to target. In this paper, we argue that a useful way to conceptualize understanding within the realm of XAI is via certain human abilities. We present four criteria for a useful conceptualization of understanding in XAI and show that these are fulfilled by an abilities-based approach: First, thinking about understanding in terms of specific abilities is (...)
  21. Creating meaningful work in the age of AI: explainable AI, explainability, and why it matters to organizational designers.Kristin Wulff & Hanne Finnestrand - forthcoming - AI and Society:1-14.
    In this paper, we contribute to research on enterprise artificial intelligence (AI), specifically to organizations improving the customer experiences and their internal processes through using the type of AI called machine learning (ML). Many organizations are struggling to get enough value from their AI efforts, and part of this is related to the area of explainability. The need for explainability is especially high in what is called black-box ML models, where decisions are made without anyone understanding how an AI reached (...)
    3 citations
  22. An explanation space to align user studies with the technical development of Explainable AI.Garrick Cabour, Andrés Morales-Forero, Élise Ledoux & Samuel Bassetto - 2023 - AI and Society 38 (2):869-887.
    Providing meaningful and actionable explanations for end-users is a situated problem requiring the intersection of multiple disciplines to address social, operational, and technical challenges. However, the explainable artificial intelligence community has not commonly adopted or created tangible design tools that allow interdisciplinary work to develop reliable AI-powered solutions. This paper proposes a formative architecture that defines the explanation space from a user-inspired perspective. The architecture comprises five intertwined components to outline explanation requirements for a task: (1) the end-users’ mental (...)
    1 citation
  23. A Means-End Account of Explainable Artificial Intelligence.Oliver Buchholz - 2023 - Synthese 202 (33):1-23.
    Explainable artificial intelligence (XAI) seeks to produce explanations for those machine learning methods which are deemed opaque. However, there is considerable disagreement about what this means and how to achieve it. Authors disagree on what should be explained (topic), to whom something should be explained (stakeholder), how something should be explained (instrument), and why something should be explained (goal). In this paper, I employ insights from means-end epistemology to structure the field. According to means-end epistemology, different means ought to (...)
    3 citations
  24. Consumers are willing to pay a price for explainable, but not for green AI. Evidence from a choice-based conjoint analysis.Markus B. Siewert, Stefan Wurster & Pascal D. König - 2022 - Big Data and Society 9 (1).
    A major challenge with the increasing use of Artificial Intelligence applications is to manage the long-term societal impacts of this technology. Two central concerns that have emerged in this respect are that the optimized goals behind the data processing of AI applications usually remain opaque and the energy footprint of their data processing is growing quickly. This study thus explores how much people value the transparency and environmental sustainability of AI using the example of personal AI assistants. The results from (...)
    4 citations
  25. What do we want from Explainable Artificial Intelligence (XAI)? – A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research.Markus Langer, Daniel Oster, Timo Speith, Lena Kästner, Kevin Baum, Holger Hermanns, Eva Schmidt & Andreas Sesing - 2021 - Artificial Intelligence 296 (C):103473.
    Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these “stakeholders' desiderata”) in a variety of contexts. However, the literature on XAI is vast, spreads out across multiple largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of stakeholders calling for (...)
    24 citations
  26. INDEX for volume 80, 2002.Eric Barnes, Neither Truth Nor Empirical Adequacy Explain, Matti Eklund, Deep Inconsistency, Barbara Montero, Harold Langsam, Self-Knowledge Externalism, Christine McKinnon Desire-Frustration, Moral Sympathy & Josh Parsons - 2002 - Australasian Journal of Philosophy 80 (4):545-548.
     
  27. Attitudinal Tensions in the Joint Pursuit of Explainable and Trusted AI.Devesh Narayanan & Zhi Ming Tan - 2023 - Minds and Machines 33 (1):55-82.
    It is frequently demanded that AI-based Decision Support Tools (AI-DSTs) ought to be both explainable to, and trusted by, those who use them. The joint pursuit of these two principles is ordinarily believed to be uncontroversial. In fact, a common view is that AI systems should be made explainable so that they can be trusted, and in turn, accepted by decision-makers. However, the moral scope of these two principles extends far beyond this particular instrumental connection. This paper argues (...)
    1 citation
  28. Towards a Questions-Centered Approach to Explainable Human-Robot Interaction.Glenda Hannibal & Felix Lindner - 2023 - In Raul Hakli, Pekka Mäkelä & Johanna Seibt (eds.), Social Robots in Social Institutions - Proceedings of Robophilosophy 2022. IOS Press. pp. 406-415.
    To address the tension between demands for more transparent AI systems and the aim to develop and design robots with apparent agency for smooth and intuitive human-robot interaction (HRI), we present in this paper an argument for why explainability in HRI would benefit from being question-centered. First, we review how explainability has been discussed in AI and HRI respectively, to then present the challenge in HRI to accommodate the requirement of transparency while also keeping up the appearance of the robot (...)
  29. The End of Vagueness: Technological Epistemicism, Surveillance Capitalism, and Explainable Artificial Intelligence.Alison Duncan Kerr & Kevin Scharp - 2022 - Minds and Machines 32 (3):585-611.
    Artificial Intelligence (AI) pervades humanity in 2022, and it is notoriously difficult to understand how certain aspects of it work. There is a movement—_Explainable_ Artificial Intelligence (XAI)—to develop new methods for explaining the behaviours of AI systems. We aim to highlight one important philosophical significance of XAI—it has a role to play in the elimination of vagueness. To show this, consider that the use of AI in what has been labeled _surveillance capitalism_ has resulted in humans quickly gaining the capability (...)
    1 citation
  30. Nullius in Explanans: an ethical risk assessment for explainable AI.Luca Nannini, Diletta Huyskes, Enrico Panai, Giada Pistilli & Alessio Tartaro - 2025 - Ethics and Information Technology 27 (1):1-28.
    Explanations are conceived to ensure the trustworthiness of AI systems. Yet, relying solely on algorithmic solutions, as provided by explainable artificial intelligence (XAI), might fall short to account for sociotechnical risks jeopardizing their factuality and informativeness. To mitigate these risks, we delve into the complex landscape of ethical risks surrounding XAI systems and their generated explanations. By employing a literature review combined with rigorous thematic analysis, we uncover a diverse array of technical risks tied to the robustness, fairness, and (...)
  31. Mapping the landscape of ethical considerations in explainable AI research.Luca Nannini, Marta Marchiori Manerba & Isacco Beretta - 2024 - Ethics and Information Technology 26 (3):1-22.
    With its potential to contribute to the ethical governance of AI, eXplainable AI (XAI) research frequently asserts its relevance to ethical considerations. Yet, the substantiation of these claims with rigorous ethical analysis and reflection remains largely unexamined. This contribution endeavors to scrutinize the relationship between XAI and ethical considerations. By systematically reviewing research papers mentioning ethical terms in XAI frameworks and tools, we investigate the extent and depth of ethical discussions in scholarly research. We observe a limited and often (...)
  32. Interestingness elements for explainable reinforcement learning: Understanding agents' capabilities and limitations.Pedro Sequeira & Melinda Gervasio - 2020 - Artificial Intelligence 288 (C):103367.
  33. Deep learning models and the limits of explainable artificial intelligence.Jens Christian Bjerring, Jakob Mainz & Lauritz Munch - 2025 - Asian Journal of Philosophy 4 (1):1-26.
    It has often been argued that we face a trade-off between accuracy and opacity in deep learning models. The idea is that we can only harness the accuracy of deep learning models by simultaneously accepting that the grounds for the models’ decision-making are epistemically opaque to us. In this paper, we ask the following question: what are the prospects of making deep learning models transparent without compromising on their accuracy? We argue that the answer to this question depends on which (...)
  34. Human centred explainable AI decision-making in healthcare.Catharina M. van Leersum & Clara Maathuis - 2025 - Journal of Responsible Technology 21 (C):100108.
  35. Explanation–Question–Response dialogue: An argumentative tool for explainable AI.Federico Castagna, Peter McBurney & Simon Parsons - 2024 - Argument and Computation:1-23.
    Advancements and deployments of AI-based systems, especially Deep Learning-driven generative language models, have accomplished impressive results over the past few years. Nevertheless, these remarkable achievements are intertwined with a related fear that such technologies might lead to a general relinquishing of our lives’s control to AIs. This concern, which also motivates the increasing interest in the eXplainable Artificial Intelligence (XAI) research field, is mostly caused by the opacity of the output of deep learning systems and the way that it (...)
  36. Hilma af Klint’s Astro-Physics “Predictions”, Explainable Somehow by Dr. Carl Jung.Cristian Horgos - 2024 - Open Journal of Philosophy 14 (4):995-1010.
    I’ve just discovered that the Abstract painting has similarities not only with the micro-cosmos (as it is stated in the book “Man and His Symbol” by dr. CG Jung) but also, which is very astonishing, with the macro-Cosmos. Shortly, Hilma af Klint painted in her “The Ten Largest” symbols that are amazingly similar with modern astro-physics pictures that were made, many decades later, by the Hubble Telescope (and were not available for human eyes on the af Klint times). The research (...)
  37. Proceedings of the Workshop on Data meets Applied Ontologies in Explainable AI (DAO-XAI 2021), part of Bratislava Knowledge September (BAKS 2021), Bratislava, Slovakia, September 18th to 19th, 2021. CEUR 2998.Roberto Confalonieri, Guendalina Righetti, Pietro Galliani, Nicolas Toquard, Oliver Kutz & Daniele Porello (eds.) - 2021
  38. AI employment decision-making: integrating the equal opportunity merit principle and explainable AI.Gary K. Y. Chan - forthcoming - AI and Society:1-12.
    Artificial intelligence tools used in employment decision-making cut across the multiple stages of job advertisements, shortlisting, interviews and hiring, and actual and potential bias can arise in each of these stages. One major challenge is to mitigate AI bias and promote fairness in opaque AI systems. This paper argues that the equal opportunity merit principle is an ethical approach for fair AI employment decision-making. Further, explainable AI can mitigate the opacity problem by placing greater emphasis on enhancing the understanding (...)
  39. Aristotelian-scholastic logic and ordinary language: The "explainable" propositions.S. Di Liso - 2005 - Rivista di Filosofia Neo-Scolastica 97 (4):571-591.
     
  40. Knowledge graphs as tools for explainable machine learning: A survey.Ilaria Tiddi & Stefan Schlobach - 2022 - Artificial Intelligence 302 (C):103627.
    1 citation
  41. Review of “AI assurance: towards trustworthy, explainable, safe, and ethical AI” by Feras A. Batarseh and Laura J. Freeman, Academic Press, 2023. [REVIEW]Jialei Wang & Li Fu - 2024 - AI and Society 39 (6):3065-3066.
  42. Explaining the Computational Mind.Marcin Miłkowski - 2013 - MIT Press.
    In the book, I argue that the mind can be explained computationally because it is itself computational—whether it engages in mental arithmetic, parses natural language, or processes the auditory signals that allow us to experience music. All these capacities arise from complex information-processing operations of the mind. By analyzing the state of the art in cognitive science, I develop an account of computational explanation used to explain the capacities in question.
    107 citations
  43. Explainable AI and Causal Understanding: Counterfactual Approaches Considered.Sam Baron - 2023 - Minds and Machines 33 (2):347-377.
    The counterfactual approach to explainable AI (XAI) seeks to provide understanding of AI systems through the provision of counterfactual explanations. In a recent systematic review, Chou et al. (Inform Fus 81:59–83, 2022) argue that the counterfactual approach does not clearly provide causal understanding. They diagnose the problem in terms of the underlying framework within which the counterfactual approach has been developed. To date, the counterfactual approach has not been developed in concert with the approach for specifying causes developed by (...)
    5 citations
  44. Explaining delusions of control: The comparator model 20 years on.Chris Frith - 2012 - Consciousness and Cognition 21 (1):52-54.
    Over the last 20 years the comparator model for delusions of control has received considerable support in terms of empirical studies. However, the original version clearly needs to be replaced by a model with a much greater degree of sophistication and specificity. Future developments are likely to involve the specification of the role of dopamine in the model and a generalisation of its explanatory power to the whole range of positive symptoms. However, we will still need to explain why symptoms (...)
    57 citations
  45. Putting explainable AI in context: institutional explanations for medical AI.Jacob Browning & Mark Theunissen - 2022 - Ethics and Information Technology 24 (2).
    There is a current debate about if, and in what sense, machine learning systems used in the medical context need to be explainable. Those arguing in favor contend these systems require post hoc explanations for each individual decision to increase trust and ensure accurate diagnoses. Those arguing against suggest the high accuracy and reliability of the systems is sufficient for providing epistemic justified beliefs without the need for explaining each individual decision. But, as we show, both solutions have limitations—and (...)
    5 citations
  46. Explaining errors in children’s questions.Caroline F. Rowland - 2007 - Cognition 104 (1):106-134.
    The ability to explain the occurrence of errors in children's speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that, as predicted by some generativist theories [e.g. Santelmann, L., Berk, S., Austin, J., Somashekar, S. & Lust. B. (2002). Continuity and development in the acquisition of inversion in yes/no questions: (...)
    15 citations
  47. Explaining variance in perceived research misbehavior: results from a survey among academic researchers in Amsterdam.Frans Oort, Lex Bouter, Brian Martinson, Joeri Tijdink & Tamarinde Haven - 2021 - Research Integrity and Peer Review 6 (1).
    Background: Concerns about research misbehavior in academic science have sparked interest in the factors that may explain research misbehavior. Often three clusters of factors are distinguished: individual factors, climate factors and publication factors. Our research question was: to what extent can individual, climate and publication factors explain the variance in frequently perceived research misbehaviors? Methods: From May 2017 until July 2017, we conducted a survey study among academic researchers in Amsterdam. The survey included three measurement instruments that we previously reported individual results of (...)
    3 citations
  48. Explaining Attitudes: A Practical Approach to the Mind.Lynne Rudder Baker - 1995 - New York: Cambridge University Press.
    Explaining Attitudes offers an important challenge to the dominant conception of belief found in the work of such philosophers as Dretske and Fodor. According to this dominant view beliefs, if they exist at all, are constituted by states of the brain. Lynne Rudder Baker rejects this view and replaces it with a quite different approach - practical realism. Seen from the perspective of practical realism, any argument that interprets beliefs as either brain states or states of immaterial souls is a (...)
    49 citations
  49. (1 other version)Explaining normativity: On rationality and the justification of reason.Joseph Raz - 1999 - Ratio 12 (4):354–379.
    Aspects of the world are normative in as much as they or their existence constitute reasons for persons, i.e. grounds which make certain beliefs, moods, emotions, intentions or actions appropriate or inappropriate. Our capacities to perceive and understand how things are, and what response is appropriate to them, and our ability to respond appropriately, make us into persons, i.e. creatures with the ability to direct their own life in accordance with their appreciation of themselves and their environment, and of the (...)
    38 citations
  50. Explaining Actions with Habits.Bill Pollard - 2006 - American Philosophical Quarterly 43 (1):57 - 69.
    From time to time we explain what people do by referring to their habits. We explain somebody’s putting the kettle on in the morning as done through “force of habit”. We explain somebody’s missing a turning by saying that she carried straight on “out of habit”. And we explain somebody’s biting her nails as a manifestation of “a bad habit”. These are all examples of what will be referred to here as habit explanations. Roughly speaking, they explain by referring to (...)
    50 citations
1 — 50 / 970