Results for 'moral AIs, hybrid system, moral disagreement problem, opacity problem'

963 found
  1. A pluralist hybrid model for moral AIs. Fei Song & Shing Hay Felix Yeung - forthcoming - AI and Society: 1-10.
    As A.I.s and machines are applied across an increasing range of social contexts, the need to implement ethics in A.I.s is pressing. In this paper, we argue for a pluralist hybrid model for the implementation of moral A.I.s. We first survey current approaches to moral A.I.s and their inherent limitations. Then we propose the pluralist hybrid approach and show how it can partly alleviate these limitations. The (...)
  2. Coming to Terms with the Black Box Problem: How to Justify AI Systems in Health Care. Ryan Marshall Felder - 2021 - Hastings Center Report 51 (4): 38-45.
    The use of opaque, uninterpretable artificial intelligence systems in health care can be medically beneficial, but it is often viewed as potentially morally problematic on account of this opacity—because the systems are black boxes. Alex John London has recently argued that opacity is not generally problematic, given that many standard therapies are explanatorily opaque and that we can rely on statistical validation of the systems in deciding whether to implement them. But is statistical validation sufficient to justify implementation (...)
  3. The epistemic opacity of autonomous systems and the ethical consequences. Mihály Héder - 2023 - AI and Society 38 (5): 1819-1827.
    This paper takes stock of the various factors that cause the design-time opacity of autonomous systems' behaviour. The factors include embodiment effects, a design-time knowledge gap, human factors, emergent behaviour and tacit knowledge. This situation is contrasted with the usual representation of moral dilemmas, which assumes perfect information. Since perfect information is not achievable, the traditional moral dilemma representations are not valid, and the whole problem of ethical autonomous systems design proves to be far more empirical (...)
  4. Moral Relativism and Moral Disagreement. Jussi Suikkanen - 2024 - In Maria Baghramian, J. Adam Carter & Rach Cosker-Rowland (eds.), Routledge Handbook of Philosophy of Disagreement. New York, NY: Routledge.
    This chapter focuses on the connection between moral disagreement and moral relativism. Moral relativists, generally speaking, think both (i) that there is no unique objectively correct moral standard and (ii) that the rightness and wrongness of an action depends in some way on a moral standard accepted by some group or an individual. This chapter will first consider the metaphysical and epistemic arguments for moral relativism that begin from the premise that there is (...)
  5. Find the Gap: AI, Responsible Agency and Vulnerability. Shannon Vallor & Tillmann Vierkant - 2024 - Minds and Machines 34 (3): 1-23.
    The responsibility gap, commonly described as a core challenge for the effective governance of, and trust in, AI and autonomous systems (AI/AS), is traditionally associated with a failure of the epistemic and/or the control condition of moral responsibility: the ability to know what we are doing and exercise competent control over this doing. Yet these two conditions are a red herring when it comes to understanding the responsibility challenges presented by AI/AS, since evidence from the cognitive sciences shows that (...)
  6. Ethics of Artificial Intelligence and Robotics. Vincent C. Müller - 2020 - In Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have a significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control them. After the introduction to the field (§1), the main themes (§2) of this article are: ethical issues that arise with AI systems as objects, i.e., tools made and used (...)
  7. The AI-design regress. Pamela Robinson - 2025 - Philosophical Studies 182 (1): 229-255.
    How should we design AI systems that make moral decisions that affect us? When there is disagreement about which moral decisions should be made and which methods would produce them, we should avoid arbitrary design choices. However, I show that this leads to a regress problem similar to the one metanormativists face involving higher orders of uncertainty. I argue that existing strategies for handling this parallel problem give verdicts about where to stop in the regress (...)
  8. Moral disagreement and artificial intelligence. Pamela Robinson - 2024 - AI and Society 39 (5): 2425-2438.
    Artificially intelligent systems will be used to make increasingly important decisions about us. Many of these decisions will have to be made without universal agreement about the relevant moral facts. For other kinds of disagreement, it is at least usually obvious what kind of solution is called for. What makes moral disagreement especially challenging is that there are three different ways of handling it. Moral solutions apply a moral theory or related principles and largely ignore (...)
  9. Explaining Disagreement: A Problem for (Some) Hybrid Expressivists. John Eriksson - 2015 - Pacific Philosophical Quarterly 96 (1): 39-53.
    Hybrid expressivists depart from pure expressivists by claiming that moral sentences express beliefs and desires. Daniel Boisvert and Michael Ridge, two prominent defenders of hybrid views, also depart from pure expressivists by claiming that moral sentences express general attitudes rather than an attitude towards the subject of the sentence. This article argues that even if the shift to general attitudes helps solve some of the traditional problems associated with pure expressivism, a view like Ridge's, according to (...)
  10. Artificial Intelligence: The Opacity of Concepts in the Uncertainty of Realities. Александр Иванович Агеев - 2022 - Russian Journal of Philosophical Sciences 65 (1): 27-43.
    The development of artificial intelligence (AI) systems and digital transformation in general lead to the formation of a multitude of autonomous agents of artificial and mixed genealogy, as well as to complex structures in the information and regulatory environment, with many opportunities and pathologies and a growing level of uncertainty in managerial decision-making. The situation is complicated by the continuing plurality of understandings of the essence of AI systems. The modern expanded understanding of AI goes back to ideas (...)
  11. Challenges of Aligning Artificial Intelligence with Human Values. Margit Sutrop - 2020 - Acta Baltica Historiae et Philosophiae Scientiarum 8 (2): 54-72.
    As artificial intelligence systems are becoming increasingly autonomous and will soon be able to make decisions on their own about what to do, AI researchers have started to talk about the need to align AI with human values. The AI ‘value alignment problem’ faces two kinds of challenges—a technical and a normative one—which are interrelated. The technical challenge deals with the question of how to encode human values in artificial intelligence. The normative challenge is associated with two questions: “Which (...)
  12. Agree to disagree: the symmetry of burden of proof in human–AI collaboration. Karin Rolanda Jongsma & Martin Sand - 2022 - Journal of Medical Ethics 48 (4): 230-231.
    In their paper ‘Responsibility, second opinions and peer-disagreement: ethical and epistemological challenges of using AI in clinical diagnostic contexts’, Kempt and Nagel discuss the use of medical AI systems and the resulting need for second opinions by human physicians when physicians and AI disagree, which they call the rule of disagreement (RoD). The authors defend RoD based on three premises: first, they argue that in cases of disagreement in medical practice, there is an increased burden of proof for (...)
  13. Aligning artificial intelligence with moral intuitions: an intuitionist approach to the alignment problem. Dario Cecchini, Michael Pflanzer & Veljko Dubljevic - 2024 - AI and Ethics: 1-11.
    As artificial intelligence (AI) continues to advance, one key challenge is ensuring that AI aligns with certain values. However, in the current diverse and democratic society, reaching a normative consensus is complex. This paper delves into the methodological aspect of how AI ethicists can effectively determine which values AI should uphold. After reviewing the most influential methodologies, we detail an intuitionist research agenda that offers guidelines for aligning AI applications with a limited set of reliable moral intuitions, each underlying (...)
  14. Deep Learning Opacity and the Ethical Accountability of AI Systems: A New Perspective. Gianfranco Basti & Giuseppe Vitiello - 2023 - In Raffaela Giovagnoli & Robert Lowe (eds.), The Logic of Social Practices II. Springer Nature Switzerland. pp. 21-73.
    In this paper we analyse the conditions for attributing to AI autonomous systems the ontological status of “artificial moral agents”, in the context of the “distributed responsibility” between humans and machines in Machine Ethics (ME). In order to address the fundamental issue in ME of the unavoidable “opacity” of their decisions with ethical/legal relevance, we start from the neuroethical evidence in cognitive science. In humans, the “transparency” and then the “ethical accountability” of their actions as responsible moral (...)
  15. AI Moral Enhancement: Upgrading the Socio-Technical System of Moral Engagement. Richard Volkman & Katleen Gabriels - 2023 - Science and Engineering Ethics 29 (2): 1-14.
    Several proposals for moral enhancement would use AI to augment (auxiliary enhancement) or even supplant (exhaustive enhancement) human moral reasoning or judgment. Exhaustive enhancement proposals conceive AI as some self-contained oracle whose superiority to our own moral abilities is manifest in its ability to reliably deliver the ‘right’ answers to all our moral problems. We think this is a mistaken way to frame the project, as it presumes that we already know many things that we are (...)
  16. Is AI a Problem for Forward Looking Moral Responsibility? The Problem Followed by a Solution. Fabio Tollon - 2022 - In Communications in Computer and Information Science. Cham. pp. 307-318.
    Recent work in AI ethics has come to bear on questions of responsibility. Specifically, questions of whether the nature of AI-based systems renders various notions of responsibility inappropriate. While substantial attention has been given to backward-looking senses of responsibility, there has been little consideration of forward-looking senses of responsibility. This paper aims to plug this gap and concerns itself with responsibility as moral obligation, a particular kind of forward-looking sense of responsibility. Responsibility as moral obligation is predicated (...)
  17. Disagreement, AI alignment, and bargaining. Harry R. Lloyd - forthcoming - Philosophical Studies: 1-31.
    New AI technologies have the potential to cause unintended harms in diverse domains including warfare, judicial sentencing, biomedicine and governance. One strategy for realising the benefits of AI whilst avoiding its potential dangers is to ensure that new AIs are properly ‘aligned’ with some form of ‘alignment target.’ One danger of this strategy is that – dependent on the alignment target chosen – our AIs might optimise for objectives that reflect the values only of a certain subset of society, and (...)
  18. Morality, Law, and Practical Reason. Enrique Benjamin R. Fernando III - 2021 - Philosophia: International Journal of Philosophy (Philippine e-journal) 22 (2): 186-204.
    Morality is a normative system of guidance that figures into practical reason by telling people what to do in various situations. The problem, however, is that morality has inherent gaps that often render it inefficacious. First, it may be indeterminate due to the high level of generality in which its principles are formulated. Second, moral terms such as ‘good’ and ‘right’ may be so vague that they fail to specify the requisite behavior. And third, its subjective aspect, which (...)
  19. Hybrid collective intelligence in a human–AI society. Marieke M. M. Peeters, Jurriaan van Diggelen, Karel van den Bosch, Adelbert Bronkhorst, Mark A. Neerincx, Jan Maarten Schraagen & Stephan Raaijmakers - 2021 - AI and Society 36 (1): 217-238.
    Within current debates about the future impact of Artificial Intelligence on human society, roughly three different perspectives can be recognised: the technology-centric perspective, claiming that AI will soon outperform humankind in all areas, and that the primary threat for humankind is superintelligence; the human-centric perspective, claiming that humans will always remain superior to AI when it comes to social and societal aspects, and that the main threat of AI is that humankind’s social nature is overlooked in technological designs; and the (...)
  20. AI and the expert; a blueprint for the ethical use of opaque AI. Amber Ross - 2022 - AI and Society: online.
    The increasing demand for transparency in AI has recently come under scrutiny. The question is often posed in terms of “epistemic double standards”, and whether the standards for transparency in AI ought to be higher than, or equivalent to, our standards for ordinary human reasoners. I agree that the push for increased transparency in AI deserves closer examination, and that comparing these standards to our standards of transparency for other opaque systems is an appropriate starting point. I suggest that a (...)
  21. AI Alignment vs. AI Ethical Treatment: Ten Challenges. Adam Bradley & Bradford Saad - manuscript.
    A morally acceptable course of AI development should avoid two dangers: creating unaligned AI systems that pose a threat to humanity and mistreating AI systems that merit moral consideration in their own right. This paper argues that these two dangers interact and that, if we create AI systems that merit moral consideration, simultaneously avoiding both of them would be extremely challenging. While our argument is straightforward and supported by a wide range of pretheoretical moral judgments, it has (...)
  22. The Moral Status of AI Entities. Joan Llorca Albareda, Paloma García & Francisco Lara - 2023 - In Francisco Lara & Jan Deckers (eds.), Ethics of Artificial Intelligence. Springer Nature Switzerland. pp. 59-83.
    The emergence of AI is posing serious challenges to standard conceptions of moral status. New non-biological entities are able to act and make decisions rationally. The question arises, in this regard, as to whether AI systems possess or can possess the necessary properties to be morally considerable. In this chapter, we have undertaken a systematic analysis of the various debates that are taking place about the moral status of AI. First, we have discussed the possibility that AI systems, (...)
  23. Problems with “Friendly AI”. Oliver Li - 2021 - Ethics and Information Technology 23 (3): 543-550.
    On virtue ethical grounds, Barbro Fröding and Martin Peterson recently recommended that near-future AIs should be developed as ‘Friendly AI’. AIs in social interaction with humans should be programmed such that they mimic aspects of human friendship. While it is a reasonable goal to implement AI systems interacting with humans as Friendly AI, I identify four issues concerning Friendly AI that need to be addressed, taking Fröding's and Peterson's understanding of Friendly AI as a starting point. In a first step, (...)
  24. Artificial morality: Top-down, bottom-up, and hybrid approaches. [REVIEW] Colin Allen, Iva Smit & Wendell Wallach - 2005 - Ethics and Information Technology 7 (3): 149-155.
    A principal goal of the discipline of artificial morality is to design artificial agents to act as if they are moral agents. Intermediate goals of artificial morality are directed at building into AI systems sensitivity to the values, ethics, and legality of activities. The development of an effective foundation for the field of artificial morality involves exploring the technological and philosophical issues involved in making computers into explicit moral reasoners. The goal of this paper is to discuss strategies (...)
  25. Aesthetic Value and the AI Alignment Problem. Alice C. Helliwell - 2024 - Philosophy and Technology 37 (4): 1-21.
    The threat from possible future superintelligent AI has given rise to discussion of the so-called “value alignment problem”. This is the problem of how to ensure artificially intelligent systems align with human values, and thus (hopefully) mitigate risks associated with them. Naturally, AI value alignment is often discussed in relation to morally relevant values, such as the value of human lives or human wellbeing. However, solutions to the value alignment problem target all human values, not only morally (...)
  26. Mind the Gap: Autonomous Systems, the Responsibility Gap, and Moral Entanglement. Trystan S. Goetze - 2022 - Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’22).
    When a computer system causes harm, who is responsible? This question has renewed significance given the proliferation of autonomous systems enabled by modern artificial intelligence techniques. At the root of this problem is a philosophical difficulty known in the literature as the responsibility gap. That is to say, because of the causal distance between the designers of autonomous systems and the eventual outcomes of those systems, the dilution of agency within the large and complex teams that design autonomous systems, (...)
  27. Explainable AI lacks regulative reasons: why AI and human decision-making are not equally opaque. Uwe Peters - forthcoming - AI and Ethics.
    Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that this (...)
  28. Stretching the notion of moral responsibility in nanoelectronics by applying AI. Robert Albin & Amos Bardea - 2021 - In Robert Albin & Amos Bardea (eds.), Ethics in Nanotechnology: Social Sciences and Philosophical Aspects, Vol. 2. Berlin: De Gruyter. pp. 75-87.
    The development of machine learning and deep learning (DL) in the field of AI (artificial intelligence) is the direct result of the advancement of nano-electronics. Machine learning is a function that provides the system with the capacity to learn from data without being programmed explicitly. It is basically a mathematical and probabilistic model. DL is part of machine learning methods based on artificial neural networks, simply called neural networks (NNs), as they are inspired by the biological NNs that constitute organic (...)
  29. Deference to Opaque Systems and Morally Exemplary Decisions. James Fritz - forthcoming - AI and Society: 1-13.
    Many have recently argued that there are weighty reasons against making high-stakes decisions solely on the basis of recommendations from artificially intelligent (AI) systems. Even if deference to a given AI system were known to reliably result in the right action being taken, the argument goes, that deference would lack morally important characteristics: the resulting decisions would not, for instance, be based on an appreciation of right-making reasons. Nor would they be performed from moral virtue; nor would they have (...)
  30. Platitudes and Opacity: Explaining Philosophical Uncertainty. John Eriksson & Ragnar Francén - 2024 - Belgrade Philosophical Annual 37 (1): 81-103.
    In The Moral Problem, Smith defended an analysis of moral judgments based on a number of platitudes about morality. The platitudes are supposed to constitute conceptual constraints which an analysis of moral terms must capture “on pain of not being an analysis of moral terms at all”. This paper discusses this philosophical methodology in light of the fact that the propositions identified as platitudes are not obvious truths – they are propositions we can be uncertain (...)
  31. Vicarious liability: a solution to a problem of AI responsibility? Matteo Pascucci & Daniela Glavaničová - 2022 - Ethics and Information Technology 24 (3): 1-11.
    Who is responsible when an AI machine causes something to go wrong? Or is there a gap in the ascription of responsibility? Answers range from claiming there is a unique responsibility gap, several different responsibility gaps, or no gap at all. In a nutshell, the problem is as follows: on the one hand, it seems fitting to hold someone responsible for a wrong caused by an AI machine; on the other hand, there seems to be no fitting bearer of (...)
  32. Understanding Moral Responsibility in Automated Decision-Making: Responsibility Gaps and Strategies to Address Them. Andrea Berber & Jelena Mijić - 2024 - Theoria: Beograd 67 (3): 177-192.
    This paper delves into the use of machine learning-based systems in decision-making processes and its implications for moral responsibility as traditionally defined. It focuses on the emergence of responsibility gaps and examines proposed strategies to address them. The paper aims to provide an introductory and comprehensive overview of the ongoing debate surrounding moral responsibility in automated decision-making. By thoroughly examining these issues, we seek to contribute to a deeper understanding of the implications of AI integration in society.
  33. Moralities of drone violence. Christian Enemark - 2023 - Edinburgh: Edinburgh University Press.
    Moral uncertainty surrounding the use of armed drones has been a persistent problem for more than two decades. In response, [this book] provides greater clarity by investigating the ways in which violent drone use is seen as just or unjust in a variety of circumstances. Adopting a broad-based approach to normative inquiry, this book organizes moral ideas around a series of concepts of drone violence, including warfare, violent law enforcement, tele-intimate violence and violence devolved from humans to (...)
  34. “I’m afraid I can’t let you do that, Doctor”: meaningful disagreements with AI in medical contexts. Hendrik Kempt, Jan-Christoph Heilinger & Saskia K. Nagel - forthcoming - AI and Society: 1-8.
    This paper explores the role and resolution of disagreements between physicians and their diagnostic AI-based decision support systems. With an ever-growing number of applications for these independently operating diagnostic tools, it becomes less and less clear what a physician ought to do in case their diagnosis is in faultless conflict with the results of the DSS. The consequences of such uncertainty can ultimately lead to effects detrimental to the intended purpose of such machines, e.g. by shifting the burden of proof (...)
  35. Presenting a hybrid model in social networks recommendation system architecture development. Abolfazl Zare, Mohammad Reza Motadel & Aliakbar Jalali - 2020 - AI and Society 35 (2): 469-483.
    There are many studies conducted on recommendation systems, most of which are focused on recommending items to users and vice versa. Nowadays, social networks are complicated due to carrying vast arrays of data about individuals and organizations. In today’s competitive environment, companies face two significant problems: supplying resources and attracting new customers. Even the concept of supply-chain management in a virtual environment is changed. In this article, we propose a new and innovative combination approach to recommend organizational people in social (...)
  36. (1 other version) Machine Learning and Irresponsible Inference: Morally Assessing the Training Data for Image Recognition Systems. Owen C. King - 2019 - In Matteo Vincenzo D'Alfonso & Don Berkich (eds.), On the Cognitive, Ethical, and Scientific Dimensions of Artificial Intelligence. Springer Verlag. pp. 265-282.
    Just as humans can draw conclusions responsibly or irresponsibly, so too can computers. Machine learning systems that have been trained on data sets that include irresponsible judgments are likely to yield irresponsible predictions as outputs. In this paper I focus on a particular kind of inference a computer system might make: identification of the intentions with which a person acted on the basis of photographic evidence. Such inferences are liable to be morally objectionable, because of a way in which they (...)
  37. AISC 17 Talk: The Explanatory Problems of Deep Learning in Artificial Intelligence and Computational Cognitive Science: Two Possible Research Agendas. Antonio Lieto - 2018 - In Proceedings of AISC 2017.
    Endowing artificial systems with explanatory capacities about the reasons guiding their decisions represents a crucial challenge and research objective in the current fields of Artificial Intelligence (AI) and Computational Cognitive Science [Langley et al., 2017]. Current mainstream AI systems, in fact, despite the enormous progress reached in specific tasks, mostly fail to provide a transparent account of the reasons determining their behavior (both in cases of a successful or unsuccessful output). This is due to the fact that the classical problem of opacity in artificial neural networks (ANNs) explodes with the adoption of current Deep Learning techniques [LeCun, Bengio, Hinton, 2015]. In this paper we argue that the explanatory deficit of such techniques represents an important problem that limits their adoption in the cognitive modelling and computational cognitive science arena. In particular, we show how the current attempts at providing explanations of deep nets' behaviour (see e.g. [Ritter et al. 2017]) are not satisfactory. As a possible way out of this problem, we present two different research strategies. The first strategy aims at dealing with the opacity problem by providing a more abstract interpretation of neural mechanisms and representations. This approach is adopted, for example, by the biologically inspired SPAUN architecture [Eliasmith et al., 2012] and by other proposals suggesting, for example, the interpretation of neural networks in terms of the Conceptual Spaces framework [Gärdenfors 2000; Lieto, Chella and Frixione, 2017]. All such proposals presuppose that the neural level of representation can be considered somehow irrelevant for attacking the problem of explanation [Lieto, Lebiere and Oltramari, 2017]. In our opinion, pursuing this research direction can still preserve the use of deep learning techniques in artificial cognitive models, provided that novel and additional results in terms of “transparency” are obtained. The second strategy is somewhat at odds with the previous one and tries to address the explanatory issue by avoiding solving the “opacity” problem directly. In this case, the idea is that of resorting to pre-compiled, plausible explanatory models of the world used in combination with deep nets (see e.g. [Augello et al. 2017]). We argue that this research agenda, even if it does not directly fit the explanatory needs of Computational Cognitive Science, can still be useful to provide results in the area of applied AI, aiming at shedding light on the models of interaction between low-level and high-level tasks (e.g. between perceptual categorization and explanation) in artificial systems.
  38. Art and the ‘Morality System’: The Case of Don Giovanni. Genia Schönbaumsfeld - 2013 - European Journal of Philosophy 23 (4): 1025-1043.
    Mozart's great opera, Don Giovanni, poses a number of significant philosophical and aesthetic challenges, and yet it remains, for the most part, little discussed by contemporary philosophers. A notable exception to this is Bernard Williams's important paper, ‘Don Juan as an Idea’, which contains an illuminating discussion of Kierkegaard's ground-breaking interpretation of the opera, ‘The Immediate Erotic Stages or the Musical-Erotic’, in Either/Or. Kierkegaard's pseudonymous author's approach here is, in some respects, reminiscent of a currently rather fashionable narrative-inspired moral (...)
  39. AI employment decision-making: integrating the equal opportunity merit principle and explainable AI. Gary K. Y. Chan - forthcoming - AI and Society.
    Artificial intelligence tools used in employment decision-making cut across the multiple stages of job advertisements, shortlisting, interviews and hiring, and actual and potential bias can arise in each of these stages. One major challenge is to mitigate AI bias and promote fairness in opaque AI systems. This paper argues that the equal opportunity merit principle is an ethical approach for fair AI employment decision-making. Further, explainable AI can mitigate the opacity problem by placing greater emphasis on enhancing the (...)
  40. Artificial Moral Responsibility: How We Can and Cannot Hold Machines Responsible. Daniel W. Tigard - 2021 - Cambridge Quarterly of Healthcare Ethics 30 (3): 435-447.
    Our ability to locate moral responsibility is often thought to be a necessary condition for conducting morally permissible medical practice, engaging in a just war, and other high-stakes endeavors. Yet, with increasing reliance upon artificially intelligent systems, we may be facing a widening responsibility gap, which, some argue, cannot be bridged by traditional concepts of responsibility. How then, if at all, can we make use of crucial emerging technologies? According to Colin Allen and Wendell Wallach, the advent of so-called ‘artificial (...)
  41. Moral dilemmas in self-driving cars. Chiara Lucifora, Giorgio Mario Grasso, Pietro Perconti & Alessio Plebe - 2020 - Rivista Internazionale di Filosofia e Psicologia 11 (2): 238-250.
    Autonomous driving systems promise important changes for the future of transport, primarily through the reduction of road accidents. However, ethical concerns, and in particular two central issues, will be key to their successful development. First, situations of risk that involve inevitable harm to passengers and/or bystanders, in which some individuals must be sacrificed for the benefit of others. Second, the identification of responsible parties and liabilities in the event of an accident. Our work addresses the first of these ethical problems. We are (...)
  42. AI, alignment, and the categorical imperative. Fritz McDonald - 2023 - AI and Ethics 3: 337-344.
    Tae Wan Kim, John Hooker, and Thomas Donaldson make an attempt, in recent articles, to solve the alignment problem. As they define the alignment problem, it is the issue of how to give AI systems moral intelligence. They contend that one might program machines with a version of Kantian ethics cast in deontic modal logic. On their view, machines can be aligned with human values if such machines obey principles of universalization and autonomy, as well as a (...)
  43. Common morality: deciding what to do. Bernard Gert - 2004 - New York: Oxford University Press.
    Moral problems do not always come in the form of great social controversies. More often, the moral decisions we make are made quietly, constantly, and within the context of everyday activities and quotidian dilemmas. Indeed, these smaller decisions are based on a moral foundation that few of us ever stop to think about but which guides our every action. Here distinguished philosopher Bernard Gert presents a clear and concise introduction to what he calls "common morality" -- the (...)
  44. AI Meets Mindfulness: Redefining Spirituality and Meditation in the Digital Age. R. L. Tripathi - 2025 - The Voice of Creative Research 7 (1): 10.
    The combination of spirituality, meditation, and artificial intelligence (AI) has promising potential to expand people's well-being through technology-based meditation. Traditional meditation originates in Zen Buddhism and Patanjali's Yoga Sutras and focuses on inner peace and intensified consciousness that shape personal disposition. AI, in turn, brings new means of delivering those practices in the form of self-improving systems that customize them and make access to them easier. However, such an integration brings major philosophical and ethical issues into question, including the genuineness of (...)
  45. Determining Argumentative Dispute Resolution Reveals Deep Disagreement Over Harassment Issue (A Case-Study of a Discussion in the Russian Parliament). Elena Lisanyuk - 2022 - Studia Humana 11 (3-4): 30-45.
    In 2018, three journalists accused one of the Members of the Russian Parliament of harassment at the workplace. Many influential persons of the Russian elite engaged in the public discussion of the conflict. We studied that high-profile discussion using a hybrid method merging human- and logic-oriented approaches in argumentation studies. The method develops ideas of the new dialectics, the argumentation logic and the logical-cognitive approach to argumentation, on which is based the algorithm for determining dispute resolution by aggregating (...)
  46. When Something Goes Wrong: Who is Responsible for Errors in ML Decision-making? Andrea Berber & Sanja Srećković - 2023 - AI and Society 38 (2): 1-13.
    Because of its practical advantages, machine learning (ML) is increasingly used for decision-making in numerous sectors. This paper demonstrates that the integral characteristics of ML, such as semi-autonomy, complexity, and non-deterministic modeling have important ethical implications. In particular, these characteristics lead to a lack of insight and lack of comprehensibility, and ultimately to the loss of human control over decision-making. Errors, which are bound to occur in any decision-making process, may lead to great harm and human rights violations. It is (...)
  47. Mapping the Stony Road toward Trustworthy AI: Expectations, Problems, Conundrums. Gernot Rieder, Judith Simon & Pak-Hang Wong - 2021 - In Marcello Pelillo & Teresa Scantamburlo (eds.), Machines We Trust: Perspectives on Dependable AI. MIT Press.
    The notion of trustworthy AI has been proposed in response to mounting public criticism of AI systems, in particular with regard to the proliferation of such systems into ever more sensitive areas of human life without proper checks and balances. In Europe, the High-Level Expert Group on Artificial Intelligence has recently presented its Ethics Guidelines for Trustworthy AI. To some, the guidelines are an important step for the governance of AI. To others, the guidelines distract effort from genuine AI regulation. (...)
  48. Philosophy, Governance and Law in the System of Social Action: Moral and Instrumental Problems of Genetic Research. Vladimir I. Przhilenskiy - 2024 - RUDN Journal of Philosophy 28 (1): 244-259.
    This research analyzes the formation of the ethics committee as a new institution in the regulation of genetic research. The external factors of this process are the increasing digitalization of medical and research practices, as well as the special situation developing in the field of genomic research and the use of genetic technologies, where issues of philosophy, jurisprudence and administration have generated many fundamentally new, and sometimes unexpected, contexts. The author shows the similarity and (...)
  49. Quasi-Metacognitive Machines: Why We Don’t Need Morally Trustworthy AI and Communicating Reliability is Enough. John Dorsch & Ophelia Deroy - 2024 - Philosophy and Technology 37 (2): 1-21.
    Many policies and ethical guidelines recommend developing “trustworthy AI”. We argue that developing morally trustworthy AI is not only unethical, as it promotes trust in an entity that cannot be trustworthy, but it is also unnecessary for optimal calibration. Instead, we show that reliability, exclusive of moral trust, entails the appropriate normative constraints that enable optimal calibration and mitigate the vulnerability that arises in high-stakes hybrid decision-making environments, without also demanding, as moral trust would, the anthropomorphization of (...)
1 – 49 / 963