Results for 'Artificial Moral Agency'

970 found
  1. Varieties of Artificial Moral Agency and the New Control Problem.Marcus Arvan - 2022 - Humana.Mente - Journal of Philosophical Studies 15 (42):225-256.
    This paper presents a new trilemma with respect to resolving the control and alignment problems in machine ethics. Section 1 outlines three possible types of artificial moral agents (AMAs): (1) 'Inhuman AMAs' programmed to learn or execute moral rules or principles without understanding them in anything like the way that we do; (2) 'Better-Human AMAs' programmed to learn, execute, and understand moral rules or principles somewhat like we do, but correcting for various sources of human (...) error; and (3) 'Human-Like AMAs' programmed to understand and apply moral values in broadly the same way that we do, with a human-like moral psychology. Sections 2–4 then argue that each type of AMA generates unique control and alignment problems that have not been fully appreciated. Section 2 argues that Inhuman AMAs are likely to behave in inhumane ways that pose serious existential risks. Section 3 then contends that Better-Human AMAs run a serious risk of magnifying some sources of human moral error by reducing or eliminating others. Section 4 then argues that Human-Like AMAs would not only likely reproduce human moral failures, but also plausibly be highly intelligent, conscious beings with interests and wills of their own who should therefore be entitled to similar moral rights and freedoms as us. This generates what I call the New Control Problem: ensuring that humans and Human-Like AMAs exert a morally appropriate amount of control over each other. Finally, Section 5 argues that resolving the New Control Problem would, at a minimum, plausibly require ensuring what Hume and Rawls term ‘circumstances of justice’ between humans and Human-Like AMAs. But, I argue, there are grounds for thinking this will be profoundly difficult to achieve. I thus conclude on a skeptical note. Different approaches to developing ‘safe, ethical AI’ generate subtly different control and alignment problems that we do not currently know how to adequately resolve, and which may or may not be ultimately surmountable.
  2. A Normative Approach to Artificial Moral Agency.Dorna Behdadi & Christian Munthe - 2020 - Minds and Machines 30 (2):195-218.
    This paper proposes a methodological redirection of the philosophical debate on artificial moral agency in view of increasingly pressing practical needs due to technological development. This “normative approach” suggests abandoning theoretical discussions about what conditions may hold for moral agency and to what extent these may be met by artificial entities such as AI systems and robots. Instead, the debate should focus on how and to what extent such entities should be included in human (...)
    19 citations
  3. A Theological Account of Artificial Moral Agency.Ximian Xu - 2023 - Studies in Christian Ethics 36 (3):642-659.
    This article seeks to explore the idea of artificial moral agency from a theological perspective. By drawing on the Reformed theology of archetype-ectype, it will demonstrate that computational artefacts are the ectype of human moral agents and, consequently, have a partial moral agency. In this light, human moral agents mediate and extend their moral values through computational artefacts, which are ontologically connected with humans and only related to limited particular moral issues. (...)
  4. ETHICA EX MACHINA. Exploring artificial moral agency or the possibility of computable ethics.Rodrigo Sanz - 2020 - Zeitschrift Für Ethik Und Moralphilosophie 3 (2):223-239.
    Since the automation revolution of our technological era, diverse machines or robots have gradually begun to reconfigure our lives. With this expansion, it seems that those machines are now faced with a new challenge: more autonomous decision-making involving life or death consequences. This paper explores the philosophical possibility of artificial moral agency through the following question: could a machine obtain the cognitive capacities needed to be a moral agent? In this regard, I propose to expose, under (...)
  5. Karol Wojtyla on Artificial Moral Agency and Moral Accountability.Richard A. Spinello - 2011 - The National Catholic Bioethics Quarterly 11 (3):469-491.
    As the notion of artificial moral agency gains popularity among ethicists, it threatens the unique status of the human person as a responsible moral agent. The philosophy of ontocentrism, popularized by Luciano Floridi, argues that biocentrism is too restrictive and must yield to a new philosophical vision that endows all beings with some intrinsic value. Floridi’s macroethics also regards more sophisticated digital entities such as robots as accountable moral agents. To refute these principles, this paper (...)
  6. Moral agency without responsibility? Analysis of three ethical models of human-computer interaction in times of artificial intelligence (AI).Alexis Fritz, Wiebke Brandt, Henner Gimpel & Sarah Bayer - 2020 - De Ethica 6 (1):3-22.
    Philosophical and sociological approaches in technology have increasingly shifted toward describing AI (artificial intelligence) systems as ‘(moral) agents,’ while also attributing ‘agency’ to them. It is only in this way – so their principal argument goes – that the effects of technological components in a complex human-computer interaction can be understood sufficiently in phenomenological-descriptive and ethical-normative respects. By contrast, this article aims to demonstrate that an explanatory model only achieves a descriptively and normatively satisfactory result if the (...)
    8 citations
  7. Artificial Moral Cognition: Moral Functionalism and Autonomous Moral Agency.Ioan Muntean & Don Howard - 2017 - In Thomas M. Powers (ed.), Philosophy and Computing: Essays in epistemology, philosophy of mind, logic, and ethics. Cham: Springer.
    This paper proposes a model of the Artificial Autonomous Moral Agent (AAMA), discusses a standard of moral cognition for AAMA, and compares it with other models of artificial normative agency. It is argued here that artificial morality is possible within the framework of a “moral dispositional functionalism.” This AAMA is able to “read” the behavior of human actors, available as collected data, and to categorize their moral behavior based on moral patterns (...)
     
    1 citation
  8. Kantian Moral Agency and the Ethics of Artificial Intelligence.Riya Manna & Rajakishore Nath - 2021 - Problemos 100:139-151.
    This paper discusses the philosophical issues pertaining to Kantian moral agency and artificial intelligence. Here, our objective is to offer a comprehensive analysis of Kantian ethics to elucidate the non-feasibility of Kantian machines. Meanwhile, the possibility of Kantian machines seems to contend with the genuine human Kantian agency. We argue that in machine morality, ‘duty’ should be performed with ‘freedom of will’ and ‘happiness’ because Kant narrated the human tendency of evaluating our ‘natural necessity’ through ‘happiness’ (...)
    1 citation
  9. The Problem Of Moral Agency In Artificial Intelligence.Riya Manna & Rajakishore Nath - 2021 - 2021 IEEE Conference on Norbert Wiener in the 21st Century (21CW).
    Humans have invented intelligent machinery to enhance their rational decision-making procedure, which is why it has been named ‘augmented intelligence’. The usage of artificial intelligence (AI) technology is increasing enormously with every passing year, and it is becoming a part of our daily life. We are using this technology not only as a tool to enhance our rationality but also elevating it to the role of an autonomous ethical agent for our future society. Norbert Wiener envisaged ‘Cybernetics’ with a view of a (...)
  10. Can a Robot Pursue the Good? Exploring Artificial Moral Agency.Amy Michelle DeBaets - 2014 - Journal of Evolution and Technology 24 (3):76-86.
    In this essay I will explore an understanding of the potential moral agency of robots, arguing that the key characteristics of physical embodiment, adaptive learning, empathy in action, and a teleology toward the good are the primary necessary components for a machine to become a moral agent. In this context, other possible options will be rejected as necessary for moral agency, including simplistic notions of intelligence, computational power, and rule-following, complete freedom, a sense of God, (...)
    5 citations
  11. Artificial Moral Responsibility: How We Can and Cannot Hold Machines Responsible.Daniel W. Tigard - 2021 - Cambridge Quarterly of Healthcare Ethics 30 (3):435-447.
    Our ability to locate moral responsibility is often thought to be a necessary condition for conducting morally permissible medical practice, engaging in a just war, and other high-stakes endeavors. Yet, with increasing reliance upon artificially intelligent systems, we may be facing a widening responsibility gap, which, some argue, cannot be bridged by traditional concepts of responsibility. How then, if at all, can we make use of crucial emerging technologies? According to Colin Allen and Wendell Wallach, the advent of so-called ‘ (...) moral agents’ (AMAs) is inevitable. Still, this notion may seem to push back the problem, leaving those who have an interest in developing autonomous technology with a dilemma. We may need to scale back our efforts at deploying AMAs (or at least maintain human oversight); otherwise, we must rapidly and drastically update our moral and legal norms in a way that ensures responsibility for potentially avoidable harms. This paper invokes contemporary accounts of responsibility in order to show how artificially intelligent systems might be held responsible. Although many theorists are concerned enough to develop artificial conceptions of agency or to exploit our present inability to regulate valuable innovations, the proposal here highlights the importance of—and outlines a plausible foundation for—a workable notion of artificial moral responsibility.
    16 citations
  12. Artificial agency, consciousness, and the criteria for moral agency: What properties must an artificial agent have to be a moral agent? [REVIEW]Kenneth Einar Himma - 2009 - Ethics and Information Technology 11 (1):19-29.
    In this essay, I describe and explain the standard accounts of agency, natural agency, artificial agency, and moral agency, as well as articulate what are widely taken to be the criteria for moral agency, supporting the contention that this is the standard account with citations from such widely used and respected professional resources as the Stanford Encyclopedia of Philosophy, Routledge Encyclopedia of Philosophy, and the Internet Encyclopedia of Philosophy. I then flesh out (...)
    72 citations
  13. Review of Carlos Montemayor's "The Prospect of a Humanitarian Artificial Intelligence: Agency and Value Alignment". London, 2023. Bloomsbury Academic, Bloomsbury Publishing. [REVIEW]Diego Morales - 2023 - Journal of Applied Philosophy 40 (4):766-768.
    Book review of Carlos Montemayor's "The Prospect of a Humanitarian Artificial Intelligence: Agency and Value Alignment".
  14. Moral Encounters of the Artificial Kind: Towards a non-anthropocentric account of machine moral agency.Fabio Tollon - 2019 - Dissertation, Stellenbosch University
    The aim of this thesis is to advance a philosophically justifiable account of Artificial Moral Agency (AMA). Concerns about the moral status of Artificial Intelligence (AI) traditionally turn on questions of whether these systems are deserving of moral concern (i.e. if they are moral patients) or whether they can be sources of moral action (i.e. if they are moral agents). On the Organic View of Ethical Status, being a moral patient (...)
    1 citation
  15. Virtual moral agency, virtual moral responsibility: on the moral significance of the appearance, perception, and performance of artificial agents. [REVIEW]Mark Coeckelbergh - 2009 - AI and Society 24 (2):181-189.
  16. Artificial Moral Agents: Moral Mentors or Sensible Tools?Fabio Fossa - 2018 - Ethics and Information Technology (2):1-12.
    The aim of this paper is to offer an analysis of the notion of artificial moral agent (AMA) and of its impact on human beings’ self-understanding as moral agents. Firstly, I introduce the topic by presenting what I call the Continuity Approach. Its main claim holds that AMAs and human moral agents exhibit no significant qualitative difference and, therefore, should be considered homogeneous entities. Secondly, I focus on the consequences this approach leads to. In order to (...)
    13 citations
  17. Moral Agents or Mindless Machines? A Critical Appraisal of Agency in Artificial Systems.Fabio Tollon - 2019 - Hungarian Philosophical Review 4 (63):9-23.
    In this paper I provide an exposition and critique of Johnson and Noorman’s (2014) three conceptualizations of the agential roles artificial systems can play. I argue that two of these conceptions are unproblematic: that of causally efficacious agency and “acting for” or surrogate agency. Their third conception, that of “autonomous agency,” however, is one I have reservations about. The authors point out that there are two ways in which the term “autonomy” can be used: there is, (...)
    3 citations
  18. “Virtue Engineering” and Moral Agency: Will Post-Humans Still Need the Virtues?Fabrice Jotterand - 2011 - American Journal of Bioethics Neuroscience 2 (4):3-9.
    It is not the purpose of this article to evaluate the techno-scientific claims of the transhumanists. Instead, I question seriously the nature of the ethics and morals they claim can, or soon will, be manipulated artificially. I argue that while the possibility to manipulate human behavior via emotional processes exists, the question still remains concerning the content of morality. In other words, neural moral enhancement does not capture the fullness of human moral psychology, which includes moral capacity (...)
    29 citations
  19. Philosophical Signposts for Artificial Moral Agent Frameworks.Robert James M. Boyles - 2017 - Suri 6 (2):92–109.
    This article focuses on a particular issue under machine ethics—that is, the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, on the other hand, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to fully develop a theory that could possibly account for the (...)
    1 citation
  20. Un-making artificial moral agents.Deborah G. Johnson & Keith W. Miller - 2008 - Ethics and Information Technology 10 (2-3):123-133.
    Floridi and Sanders' seminal work, “On the morality of artificial agents”, has catalyzed attention around the moral status of computer systems that perform tasks for humans, effectively acting as “artificial agents.” Floridi and Sanders argue that the class of entities considered moral agents can be expanded to include computers if we adopt the appropriate level of abstraction. In this paper we argue that the move to distinguish levels of abstraction is far from decisive on this issue. (...)
    39 citations
  21. A Prospective Framework for the Design of Ideal Artificial Moral Agents: Insights from the Science of Heroism in Humans.Travis J. Wiltshire - 2015 - Minds and Machines 25 (1):57-71.
    The growing field of machine morality has become increasingly concerned with how to develop artificial moral agents. However, there is little consensus on what constitutes an ideal moral agent, let alone an artificial one. Leveraging a recent account of heroism in humans, the aim of this paper is to provide a prospective framework for conceptualizing, and in turn designing, ideal artificial moral agents, namely those that would be considered heroic robots. First, an overview of (...)
    7 citations
  22. Blame It on the AI? On the Moral Responsibility of Artificial Moral Advisors.Mihaela Constantinescu, Constantin Vică, Radu Uszkai & Cristina Voinea - 2021 - Philosophy and Technology 35 (2):1-26.
    Deep learning AI systems have proven a wide capacity to take over human-related activities such as car driving, medical diagnosing, or elderly care, often displaying behaviour with unpredictable consequences, including negative ones. This has raised the question whether highly autonomous AI may qualify as morally responsible agents. In this article, we develop a set of four conditions that an entity needs to meet in order to be ascribed moral responsibility, by drawing on Aristotelian ethics and contemporary philosophical research. We (...)
    7 citations
  23. A neo-aristotelian perspective on the need for artificial moral agents (AMAs).Alejo José G. Sison & Dulce M. Redín - 2023 - AI and Society 38 (1):47-65.
    We examine Van Wynsberghe and Robbins' (JAMA 25:719-735, 2019) critique of the need for Artificial Moral Agents (AMAs) and its rebuttal by Formosa and Ryan (JAMA 10.1007/s00146-020-01089-6, 2020), set against a neo-Aristotelian ethical background. Neither Van Wynsberghe and Robbins' (JAMA 25:719-735, 2019) essay nor Formosa and Ryan's (JAMA 10.1007/s00146-020-01089-6, 2020) is explicitly framed within the teachings of a specific ethical school. The former appeals to the lack of “both empirical and intuitive support” (Van Wynsberghe and Robbins 2019, p. (...)
    4 citations
  24. Manufacturing Morality: A general theory of moral agency grounding computational implementations: the ACTWith model.Jeffrey White - 2013 - In Computational Intelligence. Nova Publications. pp. 1-65.
    The ultimate goal of research into computational intelligence is the construction of a fully embodied and fully autonomous artificial agent. This ultimate artificial agent must not only be able to act, but it must be able to act morally. In order to realize this goal, a number of challenges must be met, and a number of questions must be answered, the upshot being that, in doing so, the form of agency to which we must aim in developing (...)
    1 citation
  25. Group Agency and Artificial Intelligence.Christian List - 2021 - Philosophy and Technology (4):1-30.
    The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they (...)
    35 citations
  26. Presuppositions of Collective Moral Agency: Analogy, Architectonics, Justice, and Casuistry.David Ardagh - 2012 - Philosophy of Management 11 (2):5-28.
    This is the second of three papers with the overall title: “A Quasi-Personal Alternative to Some Anglo-American Pluralist Models of Organisations: Towards an Analysis of Corporate Self-Governance for Virtuous Organisations”.1 In the first paper, entitled: “Organisations as quasi-personal entities: from ‘governing’ of the self to organisational ‘self’-governance: a Neo-Aristotelian quasi-personal model of organisations”, the artificial corporate analogue of a natural person sketched there was said to have quasi-directive, quasi-operational and quasi-enabling/resource-provision capacities. Its use of these capacities following joint deliberation (...)
    3 citations
  27. Do androids dream of normative endorsement? On the fallibility of artificial moral agents.Frodo Podschwadek - 2017 - Artificial Intelligence and Law 25 (3):325-339.
    The more autonomous future artificial agents will become, the more important it seems to equip them with a capacity for moral reasoning and to make them autonomous moral agents. Some authors have even claimed that one of the aims of AI development should be to build morally praiseworthy agents. From the perspective of moral philosophy, praiseworthy moral agents, in any meaningful sense of the term, must be fully autonomous moral agents who endorse moral (...)
    4 citations
  28. A Critique of Some Anglo-American Models of Collective Moral Agency in Business.David Ardagh - 2013 - Philosophy of Management 12 (3):5-25.
    The paper completes a trilogy of papers, under the title: “A Quasi-Personal Alternative to Some Anglo-American Pluralist Models of Organisations: Towards an Analysis of Corporate Self-Governance for Virtuous Organisations”. The first two papers of the three are published in Philosophy of Management, Volumes 10,3 and 11,2. This last paper argues that three dominant Anglo-American organisational theories which see themselves as “business ethics-friendly,” are less so than they seem. It will be argued they present obstacles to collective corporate moral (...). They are: 1) the dominant “soft pluralist” organisational theory of Bolman and Deal, published in 1984 and more recently expressed in Reframing Organisations: Artistry, Choice, and Leadership, 5th edition, 2013, which is based on “reframing,” and which we will call reframing theory (RT); 2) the Business Ethics deployment of Stakeholder Management Theory (SMT) associated with R. Edward Freeman, and several colleagues, dominant in the same period (1984-); and 3) to a much lesser degree, an adapted version of SMT in the Integrated Social Contract Theory (ISCT) of Donaldson and Dunfee (Ties That Bind, Harvard Business School Press (1999)). This paper suggests a return, from RT, SMT, and ISCT, to an older “participative-structuralist” Neo-Aristotelian virtue-ethics based account, based on an analogy between “natural” persons, and organisations as “artificial” persons, with natural persons seen as “flat” architectonically related sets of capacity in complementary relation, and organisations as even flatter architectonic hierarchies of groups of incumbents in roles. This quasi-personal model preserves the possibility of corporate moral agency and some hierarchical and lateral order between leadership groups and other functional roles in the ethical governance of the whole corporation, as a collective moral agent. 
The quasi-person model would make possible assigning degrees of responsibility and a more coherent interface of Ethics, Organisational Ethics, and Management Theory; the reconfiguring of the place of business in society; an alternate ethico-political basis for Corporate Social Responsibility; and a rethinking of the design of the business corporate form, within the practice and institutions of business, but embedded in a state as representing the community.
    2 citations
  29. Understanding Artificial Agency.Leonard Dung - forthcoming - Philosophical Quarterly.
    Which artificial intelligence (AI) systems are agents? To answer this question, I propose a multidimensional account of agency. According to this account, a system's agency profile is jointly determined by its level of goal-directedness and autonomy as well as its abilities for directly impacting the surrounding world, long-term planning and acting for reasons. Rooted in extant theories of agency, this account enables fine-grained, nuanced comparative characterizations of artificial agency. I show that this account has (...)
    3 citations
  30. A moral analysis of intelligent decision-support systems in diagnostics through the lens of Luciano Floridi’s information ethics.Dmytro Mykhailov - 2021 - Human Affairs 31 (2):149-164.
    Contemporary medical diagnostics has a dynamic moral landscape, which includes a variety of agents, factors, and components. A significant part of this landscape is composed of information technologies that play a vital role in doctors’ decision-making. This paper focuses on the so-called Intelligent Decision-Support System that is widely implemented in the domain of contemporary medical diagnosis. The purpose of this article is twofold. First, I will show that the IDSS may be considered a moral agent in the practice (...)
    5 citations
  31. Human Goals Are Constitutive of Agency in Artificial Intelligence.Elena Popa - 2021 - Philosophy and Technology 34 (4):1731-1750.
    The question whether AI systems have agency is gaining increasing importance in discussions of responsibility for AI behavior. This paper argues that an approach to artificial agency needs to be teleological, and consider the role of human goals in particular if it is to adequately address the issue of responsibility. I will defend the view that while AI systems can be viewed as autonomous in the sense of identifying or pursuing goals, they rely on human goals and (...)
    10 citations
  32. Kantian Notion of freedom and Autonomy of Artificial Agency.Manas Sahu - 2021 - Prometeica - Revista De Filosofía Y Ciencias 23:136-149.
    The objective of this paper is to provide a critical analysis of the Kantian notion of freedom (especially the problem of the third antinomy and its resolution in the critique of pure reason); its significance in the contemporary debate on free-will and determinism, and the possibility of autonomy of artificial agency in the Kantian paradigm of autonomy. Kant's resolution of the third antinomy by positing the ground in the noumenal self resolves the problem of antinomies; however, invites an (...)
  33. The Possibilities of Machine Morality.Jonathan Pengelly - 2023 - Dissertation, Victoria University of Wellington
    This thesis shows morality to be broader and more diverse than its human instantiation. It uses the idea of machine morality to argue for this position. Specifically, it contrasts the possibilities open to humans with those open to machines to meaningfully engage with the moral domain. This contrast identifies distinctive characteristics of human morality, which are not fundamental to morality itself, but constrain our thinking about morality and its possibilities. It also highlights the inherent potential of machine morality (...)
  34. "I don't trust you, you faker!" On Trust, Reliance, and Artificial Agency.Fabio Fossa - 2019 - Teoria 39 (1):63-80.
    The aim of this paper is to clarify the extent to which relationships between Human Agents (HAs) and Artificial Agents (AAs) can be adequately defined in terms of trust. Since such relationships consist mostly in the allocation of tasks to technological products, particular attention is paid to the notion of delegation. In short, I argue that it would be more accurate to describe direct relationships between HAs and AAs in terms of reliance, rather than in terms of trust. However, (...)
    2 citations
  35. Risk Imposition by Artificial Agents: The Moral Proxy Problem.Johanna Thoma - 2022 - In Silja Voeneky, Philipp Kellmeyer, Oliver Mueller & Wolfram Burgard (eds.), The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives. Cambridge University Press.
    Where artificial agents are not liable to be ascribed true moral agency and responsibility in their own right, we can understand them as acting as proxies for human agents, as making decisions on their behalf. What I call the ‘Moral Proxy Problem’ arises because it is often not clear for whom a specific artificial agent is acting as a moral proxy. In particular, we need to decide whether artificial agents should be acting as (...)
    1 citation
  36. Moral Judgments in the Age of Artificial Intelligence.Yulia W. Sullivan & Samuel Fosso Wamba - 2022 - Journal of Business Ethics 178 (4):917-943.
    The current research aims to answer the following question: “who will be held responsible for harm involving an artificial intelligence system?” Drawing upon the literature on moral judgments, we assert that when people perceive an AI system’s action as causing harm to others, they will assign blame to different entity groups involved in an AI’s life cycle, including the company, the developer team, and even the AI system itself, especially when such harm is perceived to be intentional. Drawing (...)
    2 citations
  37. Moral Machines: Teaching Robots Right From Wrong.Wendell Wallach & Colin Allen - 2008 - New York, US: Oxford University Press.
    Computers are already approving financial transactions, controlling electrical supplies, and driving trains. Soon, service robots will be taking care of the elderly in their homes, and military robots will have their own targeting and firing protocols. Colin Allen and Wendell Wallach argue that as robots take on more and more responsibility, they must be programmed with moral decision-making abilities, for our own safety. Taking a fast paced tour through the latest thinking about philosophical ethics and artificial intelligence, the (...)
    193 citations
  38. Ethics of Artificial Intelligence. Vincent C. Müller - 2021 - In Anthony Elliott (ed.), The Routledge Social Science Handbook of AI. Routledge. pp. 122-137.
    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. - After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made (...)
  39. Action and Agency in Artificial Intelligence: A Philosophical Critique. Justin Nnaemeka Onyeukaziri - 2023 - Philosophia: International Journal of Philosophy (Philippine e-journal) 24 (1):73-90.
    The objective of this work is to explore the notion of “action” and “agency” in artificial intelligence (AI). It employs a metaphysical notion of action and agency as an epistemological tool in the critique of the notion of “action” and “agency” in artificial intelligence. Hence, both a metaphysical and cognitive analysis is employed in the investigation of the quiddity and nature of action and agency per se, and how they are, by extension employed in (...)
  40. Ethics of Artificial Intelligence and Robotics. Vincent C. Müller - 2020 - In Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and (...)
  41. Blame-Laden Moral Rebukes and the Morally Competent Robot: A Confucian Ethical Perspective. Qin Zhu, Tom Williams, Blake Jackson & Ruchen Wen - 2020 - Science and Engineering Ethics 26 (5):2511-2526.
    Empirical studies have suggested that language-capable robots have the persuasive power to shape the shared moral norms based on how they respond to human norm violations. This persuasive power presents cause for concern, but also the opportunity to persuade humans to cultivate their own moral development. We argue that a truly socially integrated and morally competent robot must be willing to communicate its objection to humans’ proposed violations of shared norms by using strategies such as blame-laden rebukes, even (...)
  42. Moral Status. Mary Anne Warren - 2003 - In R. G. Frey & Christopher Heath Wellman (eds.), A Companion to Applied Ethics. Malden, MA: Wiley-Blackwell. pp. 439–450.
    This chapter contains sections titled: What is Moral Status? The Moral Agency Theory The Genetic Humanity Theory The Sentience Theory The Organic Life Theory Two Relationship‐based Theories Combining these Criteria Principles of Moral Status Human Zygotes, Embryos, and Fetuses Are All Animals Equal? Machines and Artificial Life‐forms Conclusion.
  43. Anthropological Crisis or Crisis in Moral Status: A Philosophy of Technology Approach to the Moral Consideration of Artificial Intelligence. Joan Llorca Albareda - 2024 - Philosophy and Technology 37 (1):1-26.
    The inquiry into the moral status of artificial intelligence (AI) is leading to prolific theoretical discussions. A new entity that does not share the material substrate of human beings begins to show signs of a number of properties that are nuclear to the understanding of moral agency. It makes us wonder whether the properties we associate with moral status need to be revised or whether the new artificial entities deserve to enter within the circle (...)
  44. Agency as Difference-Making: Causal Foundations of Moral Responsibility. Johannes Himmelreich - 2015 - Dissertation, London School of Economics and Political Science.
    We are responsible for some things but not for others. In this thesis, I investigate what it takes for an entity to be responsible for something. This question has two components: agents and actions. I argue for a permissive view about agents. Entities such as groups or artificially intelligent systems may be agents in the sense required for responsibility. With respect to actions, I argue for a causal view. The relation in virtue of which agents are responsible for actions is (...)
  45. Artificial Agents: Responsibility & Control Gaps. Herman Veluwenkamp & Frank Hindriks - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    Artificial agents create significant moral opportunities and challenges. Over the last two decades, discourse has largely focused on the concept of a ‘responsibility gap.’ We argue that this concept is incoherent, misguided, and diverts attention from the core issue of ‘control gaps.’ Control gaps arise when there is a discrepancy between the causal control an agent exercises and the moral control it should possess or emulate. Such gaps present moral risks, often leading to harm or ethical (...)
  46. The Democratic Inclusion of Artificial Intelligence? Exploring the Patiency, Agency and Relational Conditions for Demos Membership. Ludvig Beckman & Jonas Hultin Rosenberg - 2022 - Philosophy and Technology 35 (2):1-24.
    Should artificial intelligences ever be included as co-authors of democratic decisions? According to the conventional view in democratic theory, the answer depends on the relationship between the political unit and the entity that is either affected or subjected to its decisions. The relational conditions for inclusion as stipulated by the all-affected and all-subjected principles determine the spatial extension of democratic inclusion. Thus, AI qualifies for democratic inclusion if and only if AI is either affected or subjected to decisions by (...)
  47. Moral Difference Between Humans and Robots: Paternalism and Human-Relative Reason. Tsung-Hsing Ho - 2022 - AI and Society 37 (4):1533-1543.
    According to some philosophers, if moral agency is understood in behaviourist terms, robots could become moral agents that are as good as or even better than humans. Given the behaviourist conception, it is natural to think that there is no interesting moral difference between robots and humans in terms of moral agency (call it the _equivalence thesis_). However, such moral differences exist: based on Strawson’s account of participant reactive attitude and Scanlon’s relational account (...)
  48. The Anachronism of Moral Individualism and the Responsibility of Extended Agency. F. Allan Hanson - 2008 - Phenomenology and the Cognitive Sciences 7 (3):415-424.
    Recent social theory has departed from methodological individualism’s explanation of action according to the motives and dispositions of human individuals in favor of explanation in terms of broader agencies consisting of both human and nonhuman elements described as cyborgs, actor-networks, extended agencies, or distributed cognition. This paper proposes that moral responsibility for action also be vested in extended agencies. It advances a consequentialist view of responsibility that takes moral responsibility to be a species of causal responsibility, and it (...)
  49. Moral Control and Ownership in AI Systems. Raul Gonzalez Fabre, Javier Camacho Ibáñez & Pedro Tejedor Escobar - 2021 - AI and Society 36 (1):289-303.
    AI systems are bringing an augmentation of human capabilities to shape the world. They may also drag a replacement of human conscience in large chunks of life. AI systems can be designed to leave moral control in human hands, to obstruct or diminish that moral control, or even to prevent it, replacing human morality with pre-packaged or developed ‘solutions’ by the ‘intelligent’ machine itself. Artificial Intelligent systems (AIS) are increasingly being used in multiple applications and receiving more (...)
  50. Artificial Dispositions: Investigating Ethical and Metaphysical Issues. William A. Bauer & Anna Marmodoro (eds.) - 2023 - New York: Bloomsbury.
    We inhabit a world not only full of natural dispositions independent of human design, but also artificial dispositions created by our technological prowess. How do these dispositions, found in automation, computation, and artificial intelligence applications, differ metaphysically from their natural counterparts? This collection investigates artificial dispositions: what they are, the roles they play in artificial systems, and how they impact our understanding of the nature of reality, the structure of minds, and the ethics of emerging technologies. (...)
1 — 50 / 970