Results for 'Artificial agents'

957 found
  1. Affective Artificial Agents as sui generis Affective Artifacts. Marco Facchin & Giacomo Zanotti - 2024 - Topoi 43 (3).
    AI-based technologies are increasingly pervasive in a number of contexts. Our affective and emotional lives are no exception. In this article, we analyze one way in which AI-based technologies can affect them. In particular, our investigation will focus on affective artificial agents, namely AI-powered software or robotic agents designed to interact with us in affectively salient ways. We build upon the existing literature on affective artifacts with the aim of providing an original analysis of affective artificial (...)
    3 citations
  2. Artificial agents among us: Should we recognize them as agents proper? Migle Laukyte - 2017 - Ethics and Information Technology 19 (1):1-17.
    In this paper, I discuss whether in a society where the use of artificial agents is pervasive, these agents should be recognized as having rights like those we accord to group agents. This kind of recognition I understand to be at once social and legal, and I argue that in order for an artificial agent to be so recognized, it will need to meet the same basic conditions in light of which group agents are (...)
    14 citations
  3. Artificial agents, good care, and modernity. Mark Coeckelbergh - 2015 - Theoretical Medicine and Bioethics 36 (4):265-277.
    When is it ethically acceptable to use artificial agents in health care? This article articulates some criteria for good care and then discusses whether machines as artificial agents that take over care tasks meet these criteria. Particular attention is paid to intuitions about the meaning of ‘care’, ‘agency’, and ‘taking over’, but also to the care process as a labour process in a modern organizational and financial-economic context. It is argued that while there is in principle (...)
    17 citations
  4. Modelling Trust in Artificial Agents, A First Step Toward the Analysis of e-Trust. Mariarosaria Taddeo - 2010 - Minds and Machines 20 (2):243-257.
    This paper provides a new analysis of e-trust, trust occurring in digital contexts among the artificial agents of a distributed artificial system. The analysis endorses a non-psychological approach and rests on a Kantian regulative ideal of a rational agent, able to choose the best option for itself, given a specific scenario and a goal to achieve. The paper first introduces e-trust, describing its relevance for contemporary society, and then presents a new theoretical analysis (...)
    44 citations
  5. Developing artificial agents worthy of trust: “Would you buy a used car from this artificial agent?” [REVIEW] F. S. Grodzinsky, K. W. Miller & M. J. Wolf - 2011 - Ethics and Information Technology 13 (1):17-27.
    There is a growing literature on the concept of e-trust and on the feasibility and advisability of “trusting” artificial agents. In this paper we present an object-oriented model for thinking about trust in both face-to-face and digitally mediated environments. We review important recent contributions to this literature regarding e-trust in conjunction with presenting our model. We identify three important types of trust interactions and examine trust from the perspective of a software developer. Too often, the primary focus of (...)
    18 citations
  6. Artificial agents - personhood in law and philosophy. Samir Chopra - manuscript
    Thinking about how the law might decide whether to extend legal personhood to artificial agents provides a valuable testbed for philosophical theories of mind. Further, philosophical and legal theorising about personhood for artificial agents can be mutually informing. We investigate two case studies, drawing on legal discussions of the status of artificial agents. The first looks at the doctrinal difficulties presented by the contracts entered into by artificial agents. We conclude that it (...)
    4 citations
  7. Risk Imposition by Artificial Agents: The Moral Proxy Problem. Johanna Thoma - 2022 - In Silja Voeneky, Philipp Kellmeyer, Oliver Mueller & Wolfram Burgard (eds.), The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives. Cambridge University Press.
    Where artificial agents are not liable to be ascribed true moral agency and responsibility in their own right, we can understand them as acting as proxies for human agents, as making decisions on their behalf. What I call the ‘Moral Proxy Problem’ arises because it is often not clear for whom a specific artificial agent is acting as a moral proxy. In particular, we need to decide whether artificial agents should be acting as proxies (...)
    1 citation
  8. Artificial agents’ explainability to support trust: considerations on timing and context. Guglielmo Papagni, Jesse de Pagter, Setareh Zafari, Michael Filzmoser & Sabine T. Koeszegi - 2023 - AI and Society 38 (2):947-960.
    Strategies for improving the explainability of artificial agents are a key approach to support the understandability of artificial agents’ decision-making processes and their trustworthiness. However, since explanations do not lend themselves to standardization, finding solutions that fit the algorithmic decision-making processes of artificial agents poses a compelling challenge. This paper addresses the concept of trust in relation to complementary aspects that play a role in interpersonal and human–agent relationships, such as users’ confidence and their perception (...)
    4 citations
  9. On the morality of artificial agents. Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility (...)
    296 citations
  10. Artificial agents and their moral nature. Luciano Floridi - 2014 - In Peter Kroes (ed.), The moral status of technical artefacts. Springer. pp. 185–212.
    Artificial agents, particularly but not only those in the infosphere (Floridi, Information – A Very Short Introduction, Oxford University Press, 2010), extend the class of entities that can be involved in moral situations, for they can be correctly interpreted as entities that can perform actions with good or evil impact (moral agents). In this chapter, I clarify the concepts of agent and of artificial agent and then distinguish between issues concerning their moral behaviour vs. issues (...)
    2 citations
  11. Can artificial agents act? Conceptual constellation for a de-humanized theory of action. Francesco Striano - 2024 - Scienza E Filosofia 31:224-244.
    This paper embarks on an exploration of the concept of agency, traditionally ascribed to humans, in the context of artificial intelligence (AI). In the first two sections, it challenges the conventional dichotomy of human agency and non-human instrumentality, arguing that advancements in technology have blurred these boundaries. In the third section, the paper introduces the reader to the philosophical perspective of new materialism, which assigns (...)
  12. Artificial agents and the expanding ethical circle. Steve Torrance - 2013 - AI and Society 28 (4):399-414.
    I discuss the realizability and the ethical ramifications of Machine Ethics, from a number of different perspectives: I label these the anthropocentric, infocentric, biocentric and ecocentric perspectives. Each of these approaches takes a characteristic view of the position of humanity relative to other aspects of the designed and the natural worlds—or relative to the possibilities of ‘extra-human’ extensions to the ethical community. In the course of the discussion, a number of key issues emerge concerning the relation between technology and ethics, (...)
    8 citations
  13. The epistemological foundations of artificial agents. Nicola Lacey & M. Lee - 2003 - Minds and Machines 13 (3):339-365.
    A situated agent is one which operates within an environment. In most cases, the environment in which the agent exists will be more complex than the agent itself. This means that an agent, human or artificial, which wishes to carry out non-trivial operations in its environment must use techniques which allow an unbounded world to be represented within a cognitively bounded agent. We present a brief description of some important theories within the fields of epistemology and metaphysics. We then (...)
  14. This “Ethical Trap” Is for Roboticists, Not Robots: On the Issue of Artificial Agent Ethical Decision-Making. Keith W. Miller, Marty J. Wolf & Frances Grodzinsky - 2017 - Science and Engineering Ethics 23 (2):389-401.
    In this paper we address the question of when a researcher is justified in describing his or her artificial agent as demonstrating ethical decision-making. The paper is motivated by the amount of research being done that attempts to imbue artificial agents with expertise in ethical decision-making. It seems clear that computing systems make decisions, in that they make choices between different options; and there is scholarship in philosophy that addresses the distinction between ethical decision-making and general decision-making. (...)
    8 citations
  15. Artificial agents: responsibility & control gaps. Herman Veluwenkamp & Frank Hindriks - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    Artificial agents create significant moral opportunities and challenges. Over the last two decades, discourse has largely focused on the concept of a ‘responsibility gap.’ We argue that this concept is incoherent, misguided, and diverts attention from the core issue of ‘control gaps.’ Control gaps arise when there is a discrepancy between the causal control an agent exercises and the moral control it should possess or emulate. Such gaps present moral risks, often leading to harm or ethical violations. We (...)
  16. Artificial agents in social cognitive sciences. Thierry Chaminade & Jessica K. Hodgins - 2006 - Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems 7 (3):347-353.
  17. Ethics and consciousness in artificial agents. Steve Torrance - 2008 - AI and Society 22 (4):495-521.
    In what ways should we include future humanoid robots, and other kinds of artificial agents, in our moral universe? We consider the Organic view, which maintains that artificial humanoid agents, based on current computational technologies, could not count as full-blooded moral agents, nor as appropriate targets of intrinsic moral concern. On this view, artificial humanoids lack certain key properties of biological organisms, which preclude them from having full moral status. Computationally controlled systems, however advanced (...)
    61 citations
  18. Attributions toward Artificial Agents in a modified Moral Turing Test. Eyal Aharoni, Sharlene Fernandes, Daniel Brady, Caelan Alexander, Michael Criner, Kara Queen, Javier Rando, Eddy Nahmias & Victor Crespo - 2024 - Scientific Reports 14 (8458):1-11.
    Advances in artificial intelligence (AI) raise important questions about whether people view moral evaluations by AI systems similarly to human-generated moral evaluations. We conducted a modified Moral Turing Test (m-MTT), inspired by Allen et al.'s proposal (Exp Theor Artif Intell 352:24–28, 2004), by asking people to distinguish real human moral evaluations from those made by a popular advanced AI language model: GPT-4. A representative sample of 299 U.S. adults first rated the quality of moral evaluations when blinded to their (...)
    1 citation
  19. Social Cognition and Artificial Agents. Anna Strasser - 2017 - In Vincent C. Müller (ed.), Philosophy and theory of artificial intelligence 2017. Berlin: Springer. pp. 106-114.
    Standard notions in philosophy of mind have a tendency to characterize socio-cognitive abilities as if they were unique to sophisticated human beings. However, assuming that it is likely that we are soon going to share a large part of our social lives with various kinds of artificial agents, it is important to develop a conceptual framework providing notions that are able to account for various types of social agents. Recent minimal approaches to socio-cognitive abilities such as mindreading (...)
    3 citations
  20. Social Cognition and Artificial Agents. Anna Strasser - 2017 - In Vincent C. Müller (ed.), Philosophy and theory of artificial intelligence 2017. Berlin: Springer. pp. 106-114.
    Standard notions in philosophy of mind have a tendency to characterize socio-cognitive abilities as if they were unique to sophisticated human beings. However, assuming that it is likely that we are soon going to share a large part of our social lives with various kinds of artificial agents, it is important to develop a conceptual framework providing notions that are able to account for various types of social agents. Recent minimal approaches to socio-cognitive abilities such as mindreading (...)
    1 citation
  21. Modeling artificial agents’ actions in context – a deontic cognitive event ontology. Miroslav Vacura - 2020 - Applied Ontology 15 (4):493-527.
    Although there have been efforts to integrate Semantic Web technologies with AI research on artificial agents, the two fields remain relatively isolated from each other. Herein, we introduce a new ontology framework designed to support the knowledge representation of artificial agents’ actions within the context of the actions of other autonomous agents and inspired by standard cognitive architectures. The framework consists of four parts: 1) an event ontology for information pertaining to actions and events; 2) an (...)
  22. A Pragmatic Approach to the Intentional Stance: Semantic, Empirical and Ethical Considerations for the Design of Artificial Agents. Guglielmo Papagni & Sabine Koeszegi - 2021 - Minds and Machines 31 (4):505-534.
    Artificial agents are progressively becoming more present in everyday-life situations and more sophisticated in their interaction affordances. In some specific cases, like Google Duplex, GPT-3 bots or DeepMind's AlphaGo Zero, their capabilities reach or exceed human levels. The use contexts of everyday life necessitate making such agents understandable by laypeople. At the same time, displaying human levels of social behavior has kindled the debate over the adoption of Dennett’s ‘intentional stance’. By means of a comparative analysis (...)
    7 citations
  23. Trust and multi-agent systems: applying the diffuse, default model of trust to experiments involving artificial agents. [REVIEW] Jeff Buechner & Herman T. Tavani - 2011 - Ethics and Information Technology 13 (1):39-51.
    We argue that the notion of trust, as it figures in an ethical context, can be illuminated by examining research in artificial intelligence on multi-agent systems in which commitment and trust are modeled. We begin with an analysis of a philosophical model of trust based on Richard Holton’s interpretation of P. F. Strawson’s writings on freedom and resentment, and we show why this account of trust is difficult to extend to artificial agents (AAs) as well as to (...)
    16 citations
  24. The Epistemological Foundations of Artificial Agents. Nick J. Lacey & M. H. Lee - 2003 - Minds and Machines 13 (3):339-365.
    A situated agent is one which operates within an environment. In most cases, the environment in which the agent exists will be more complex than the agent itself. This means that an agent, human or artificial, which wishes to carry out non-trivial operations in its environment must use techniques which allow an unbounded world to be represented within a cognitively bounded agent. We present a brief description of some important theories within the fields of epistemology and metaphysics. We then (...)
  25. A Conceptual and Computational Model of Moral Decision Making in Human and Artificial Agents. Wendell Wallach, Stan Franklin & Colin Allen - 2010 - Topics in Cognitive Science 2 (3):454-485.
    Recently, there has been a resurgence of interest in general, comprehensive models of human cognition. Such models aim to explain higher-order cognitive faculties, such as deliberation and planning. Given a computational representation, the validity of these models can be tested in computer simulations such as software agents or embodied robots. The push to implement computational models of this kind has created the field of artificial general intelligence (AGI). Moral decision making is arguably one of the most challenging tasks (...)
    24 citations
  26. Hiding Behind Machines: Artificial Agents May Help to Evade Punishment. Till Feier, Jan Gogoll & Matthias Uhl - 2022 - Science and Engineering Ethics 28 (2):1-19.
    The transfer of tasks with sometimes far-reaching implications to autonomous systems raises a number of ethical questions. In addition to fundamental questions about the moral agency of these systems, behavioral issues arise. We investigate the empirically accessible question of whether the imposition of harm by an agent is systematically judged differently when the agent is artificial and not human. The results of a laboratory experiment suggest that decision-makers can actually avoid punishment more easily by delegating to machines than by (...)
    4 citations
  27. On the Moral Equality of Artificial Agents. Christopher Wareham - 2011 - International Journal of Technoethics 2 (1):35-42.
    Artificial agents such as robots are performing increasingly significant ethical roles in society. As a result, there is a growing literature regarding their moral status with many suggesting it is justified to regard manufactured entities as having intrinsic moral worth. However, the question of whether artificial agents could have the high degree of moral status that is attributed to human persons has largely been neglected. To address this question, the author developed a respect-based account of the (...)
    6 citations
  28. How should artificial agents make risky choices on our behalf? Johanna Thoma - 2021 - LSE Philosophy Blog.
  29. On social laws for artificial agent societies: off-line design. Yoav Shoham & Moshe Tennenholtz - 1995 - Artificial Intelligence 73 (1-2):231-252.
  30. (1 other version) Intention Reconsideration in Artificial Agents: a Structured Account. Fabrizio Cariani - forthcoming - Philosophical Studies (special issue).
    An important module in the Belief-Desire-Intention architecture for artificial agents (which builds on Michael Bratman's work in the philosophy of action) focuses on the task of intention reconsideration. The theoretical task is to formulate principles governing when an agent ought to undo a prior committed intention and reopen deliberation. Extant proposals for such a principle, if sufficiently detailed, are either too task-specific or too computationally demanding. I propose that an agent ought to reconsider an intention whenever some incompatible (...)
  31. (1 other version) The ethics of designing artificial agents. Frances S. Grodzinsky, Keith W. Miller & Marty J. Wolf - 2008 - Ethics and Information Technology 10 (2-3):112-121.
    In their important paper “Autonomous Agents”, Floridi and Sanders use “levels of abstraction” to argue that computers are or may soon be moral agents. In this paper we use the same levels of abstraction to illuminate differences between human moral agents and computers. In their paper, Floridi and Sanders contributed definitions of autonomy, moral accountability and responsibility, but they have not explored deeply some essential questions that need to be answered by computer scientists who design artificial (...)
    1 citation
  32. Beyond persons: extending the personal/subpersonal distinction to non-rational animals and artificial agents. Manuel de Pinedo-Garcia & Jason Noble - 2008 - Biology and Philosophy 23 (1):87-100.
    The distinction between personal level explanations and subpersonal ones has been subject to much debate in philosophy. We understand it as one between explanations that focus on an agent’s interaction with its environment, and explanations that focus on the physical or computational enabling conditions of such an interaction. The distinction, understood this way, is necessary for a complete account of any agent, rational or not, biological or artificial. In particular, we review some recent research in Artificial Life that (...)
    7 citations
  33. Biologically Inspired Emotional Expressions for Artificial Agents. Beáta Korcsok, Veronika Konok, György Persa, Tamás Faragó, Mihoko Niitsuma, Ádám Miklósi, Péter Korondi, Péter Baranyi & Márta Gácsi - 2018 - Frontiers in Psychology 9:388957.
    A special area of human-machine interaction, the expression of emotions gains importance with the continuous development of artificial agents such as social robots or interactive mobile applications. We developed a prototype version of an abstract emotion visualization agent to express five basic emotions and a neutral state. In contrast to well-known symbolic characters (e.g., smileys) these displays follow general biological and ethological rules. We conducted a multiple questionnaire study on the assessment of the displays with Hungarian and Japanese (...)
  34. Privacy and artificial agents, or, is Google reading my email? Samir Chopra & Laurence White - manuscript
    In Proceedings of the International Joint Conference on Artificial Intelligence, 2007.
  35. Adopting the intentional stance toward natural and artificial agents. Jairo Perez-Osorio & Agnieszka Wykowska - 2020 - Philosophical Psychology 33 (3):369-395.
    In our daily lives, we need to predict and understand others’ behavior in order to navigate through our social environment. Predictions concerning other humans’ behavior usually refer to their mental states, such as beliefs or intentions. Such a predictive strategy is called ‘adoption of the intentional stance.’ In this paper, we review literature related to the concept of intentional stance from the perspectives of philosophy, psychology, human development, culture, and human-robot interaction. We propose that adopting the intentional stance might be (...)
    9 citations
  36. Towards socially-competent and culturally-adaptive artificial agents. Chiara Bassetti, Enrico Blanzieri, Stefano Borgo & Sofia Marangon - 2022 - Interaction Studies 23 (3):469-512.
    The development of artificial agents for social interaction pushes to enrich robots with social skills and knowledge about (local) social norms. One possibility is to distinguish the expressive and the functional orders during a human-robot interaction. The overarching aim of this work is to set a framework to make the artificial agent socially-competent beyond dyadic interaction – interaction in varying multi-party social situations – and beyond individual-based user personalization, thereby enlarging the current conception of “culturally-adaptive”. The core (...)
    1 citation
  37. Intention reconsideration in artificial agents: a structured account. Fabrizio Cariani - 2025 - Philosophical Studies 182 (1):205-228.
    An important module in the Belief-Desire-Intention architecture for artificial agents (which builds on Michael Bratman’s work in the philosophy of action) focuses on the task of intention reconsideration. The theoretical task is to formulate principles governing when an agent ought to undo a prior committed intention and reopen deliberation. Extant proposals for such a principle, if sufficiently detailed, are either too task-specific or too computationally demanding. I propose that an agent ought to reconsider an intention whenever some incompatible (...)
  38. Learning to Manipulate and Categorize in Human and Artificial Agents. Giuseppe Morlino, Claudia Gianelli, Anna M. Borghi & Stefano Nolfi - 2015 - Cognitive Science 39 (1):39-64.
    This study investigates the acquisition of integrated object manipulation and categorization abilities through a series of experiments in which human adults and artificial agents were asked to learn to manipulate two-dimensional objects that varied in shape, color, weight, and color intensity. The analysis of the obtained results and the comparison of the behavior displayed by human and artificial agents allowed us to identify the key role played by features affecting the agent/environment interaction, the relation between category (...)
    4 citations
  39. Can a Robot Lie? Exploring the Folk Concept of Lying as Applied to Artificial Agents. Markus Kneer - 2021 - Cognitive Science 45 (10):e13032.
    The potential capacity for robots to deceive has received considerable attention recently. Many papers explore the technical possibility for a robot to engage in deception for beneficial purposes (e.g., in education or health). In this short experimental paper, I focus on a more paradigmatic case: robot lying (lying being the textbook example of deception) for nonbeneficial purposes as judged from the human point of view. More precisely, I present an empirical experiment that investigates the following three questions: (a) Are ordinary (...)
    7 citations
  40. Beyond persons: extending the personal/subpersonal distinction to non-rational animals and artificial agents. Manuel Pinedo-Garcia & Jason Noble - 2008 - Biology and Philosophy 23 (1):87-100.
    The distinction between personal level explanations and subpersonal ones has been subject to much debate in philosophy. We understand it as one between explanations that focus on an agent’s interaction with its environment, and explanations that focus on the physical or computational enabling conditions of such an interaction. The distinction, understood this way, is necessary for a complete account of any agent, rational or not, biological or artificial. In particular, we review some recent research in Artificial Life that (...)
    4 citations
  41. What is it like to encounter an autonomous artificial agent? Karsten Weber - 2013 - AI and Society 28 (4):483-489.
    Following up on Thomas Nagel’s paper “What is it like to be a bat?” and Alan Turing’s essay “Computing machinery and intelligence,” it shall be claimed that a successful interaction of human beings and autonomous artificial agents depends more on which characteristics human beings ascribe to the agent than on whether the agent really has those characteristics. It will be argued that Masahiro Mori’s concept of the “uncanny valley” as well as evidence from several empirical studies supports that (...)
    3 citations
  42. Is it possible to grow an I–Thou relation with an artificial agent? A dialogistic perspective. Stefan Trausan-Matu - 2019 - AI and Society 34 (1):9-17.
    The paper analyzes if it is possible to grow an I–Thou relation in the sense of Martin Buber with an artificial, conversational agent developed with Natural Language Processing techniques. The requirements for such an agent, the possible approaches for the implementation, and their limitations are discussed. The relation of the achievement of this goal with the Turing test is emphasized. Novel perspectives on the I–Thou and I–It relations are introduced according to the sociocultural paradigm and Mikhail Bakhtin’s dialogism, polyphony (...)
    2 citations
  43. ConsScale: A pragmatic scale for measuring the level of consciousness in artificial agents. Raul Arrabales, Agapito Ledezma & Araceli Sanchis - 2010 - Journal of Consciousness Studies 17 (3-4):3-4.
    One of the key problems the field of Machine Consciousness is currently facing is the need to accurately assess the potential level of consciousness that an artificial agent might develop. This paper presents a novel artificial consciousness scale designed to provide a pragmatic and intuitive reference in the evaluation of MC implementations. The version of ConsScale described in this work provides a comprehensive evaluation mechanism which enables the estimation of the potential degree of consciousness of most of the (...)
    5 citations
  44. Film. Mirrors of nature: artificial agents in real life and virtual worlds. Paul Dumouchel - 2015 - In Scott Cowdell, Chris Fleming & Joel Hodge (eds.), Mimesis, movies, and media. London: Bloomsbury Academic.
  45. Demonstrating sensemaking emergence in artificial agents: A method and an example. Olivier L. Georgeon & James B. Marshall - 2013 - International Journal of Machine Consciousness 5 (2):131-144.
    We propose an experimental method to study the possible emergence of sensemaking in artificial agents. This method involves analyzing the agent's behavior in a test bed environment that presents regularities in the possibilities of interaction afforded to the agent, while the agent has no presuppositions about the underlying functioning of the environment that explains such regularities. We propose a particular environment that permits such an experiment, called the Small Loop Problem. We argue that the agent's behavior demonstrates sensemaking (...)
  46. What makes full artificial agents morally different. Erez Firt - forthcoming - AI and Society:1-10.
    In the research field of machine ethics, we commonly categorize artificial moral agents into four types, with the most advanced referred to as a full ethical agent, or sometimes a full-blown Artificial Moral Agent (AMA). This type has three main characteristics: autonomy, moral understanding and a certain level of consciousness, including intentional mental states, moral emotions such as compassion, the ability to praise and condemn, and a conscience. This paper aims to discuss various aspects of full-blown AMAs (...)
  47. Representation in natural and artificial agents. M. Bickhard - 1999 - In Edwina Taborsky (ed.), Semiosis, Evolution, Energy: Towards a Reconceptualization of the Sign. Shaker Verlag. pp. 15-26.
  48. (1 other version) Bio-Agency and the Possibility of Artificial Agents. Anne Sophie Meincke - 2018 - In David Hommen & Alexander Christian (eds.), Philosophy of Science - Between the Natural Sciences, the Social Sciences, and the Humanities. Selected Papers from the 2016 conference of the German Society of Philosophy of Science. pp. 65-93.
    Within the philosophy of biology, recently promising steps have been made towards a biologically grounded concept of agency. Agency is described as bio-agency: the intrinsically normative adaptive behaviour of human and non-human organisms, arising from their biological autonomy. My paper assesses the bio-agency approach by examining criticism recently directed by its proponents against the project of embodied robotics. Defenders of the bio-agency approach have claimed that embodied robots do not, and for fundamental reasons cannot, qualify as artificial agents (...)
    2 citations
  49. Understanding Sophia? On human interaction with artificial agents. Thomas Fuchs - 2024 - Phenomenology and the Cognitive Sciences 23 (1):21-42.
    Advances in artificial intelligence (AI) create an increasing similarity between the performance of AI systems or AI-based robots and human communication. They raise the questions of whether it is possible to communicate with, understand, and even empathically perceive artificial agents; whether we should ascribe actual subjectivity and thus quasi-personal status to them beyond a certain level of simulation; and what will be the impact of an increasing dissolution of the distinction between simulated and real encounters. (1) To answer these (...)
    5 citations
  50. Artificial moral agents are infeasible with foreseeable technologies. Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
    For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems would be a substantial departure from current technologies and theory, and remain a remote prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behavior and humans will retain full responsibility.
    16 citations
1–50 of 957