Results for 'emotional AI'

962 found
  1. Emotional AI, soft biometrics and the surveillance of emotional life: An unusual consensus on privacy. Andrew McStay - 2020 - Big Data and Society 7 (1).
    By the early 2020s, emotional artificial intelligence will become increasingly present in everyday objects and practices such as assistants, cars, games, mobile phones, wearables, toys, marketing, insurance, policing, education and border controls. There is also keen interest in using these technologies to regulate and optimize the emotional experiences of spaces, such as workplaces, hospitals, prisons, classrooms, travel infrastructures, restaurants, retail and chain stores. Developers frequently claim that their applications do not identify people. Taking the claim at face value, (...)
    8 citations
  2. Emotional AI and the future of wellbeing in the post-pandemic workplace. Peter Mantello & Manh-Tung Ho - forthcoming - AI and Society:1-7.
    This paper interrogates the growing pervasiveness of affect recognition tools as an emerging layer of human-centric automated management in the global workplace. While vendors tout the neoliberal incentives of emotion-recognition technology as a pre-eminent tool of workplace wellness, we argue that emotional AI recalibrates the horizons of capital not by expanding outward into the consumer realm (like surveillance capitalism). Rather, as a new genus of digital Taylorism, it turns inward, passing through the corporeal exterior to extract greater surplus value and (...)
    1 citation
  3. Emotional AI, Ethics, and Japanese Spice: Contributing Community, Wholeness, Sincerity, and Heart. Andrew McStay - 2021 - Philosophy and Technology 34 (4):1781-1802.
    This paper assesses leading Japanese philosophical thought since the onset of Japan’s modernity: namely, from the Meiji Restoration onwards. It argues that there are lessons of global value for AI ethics to be found from examining leading Japanese philosophers of modernity and ethics, each of whom engaged closely with Western philosophical traditions. Turning to these philosophers allows us to advance from what are broadly individualistically and Western-oriented ethical debates regarding emergent technologies that function in relation to AI, by introducing notions (...)
    2 citations
  4. Music evokes vicarious emotions in listeners. Ai Kawakami, Kiyoshi Furukawa & Kazuo Okanoya - 2014 - Frontiers in Psychology 5.
    4 citations
  5. Bosses without a heart: socio-demographic and cross-cultural determinants of attitude toward Emotional AI in the workplace. Peter Mantello, Manh-Tung Ho, Minh-Hoang Nguyen & Quan-Hoang Vuong - 2023 - AI and Society 38 (1):97-119.
    Biometric technologies are becoming more pervasive in the workplace, augmenting managerial processes such as hiring, monitoring and terminating employees. Until recently, these devices consisted mainly of GPS tools that track location, software that scrutinizes browser activity and keyboard strokes, and heat/motion sensors that monitor workstation presence. Today, however, a new generation of biometric devices has emerged that can sense, read, monitor and evaluate the affective state of a worker. More popularly known by its commercial moniker, Emotional AI, the technology (...)
    7 citations
  6. Influence of trait empathy on the emotion evoked by sad music and on the preference for it. Ai Kawakami & Kenji Katahira - 2015 - Frontiers in Psychology 6.
    6 citations
  7. We have to talk about emotional AI and crime. Lena Podoletz - 2023 - AI and Society 38 (3):1067-1082.
    Emotional AI is an emerging technology used to make probabilistic predictions about the emotional states of people using data sources, such as facial (micro)-movements, body language, vocal tone or the choice of words. The performance of such systems is heavily debated and so are the underlying scientific methods that serve as the basis for many such technologies. In this article I will engage with this new technology, and with the debates and literature that surround it. Working at the (...)
    2 citations
  8. Correction to: Emotional AI and the future of wellbeing in the post-pandemic workplace. Peter Mantello & Manh-Tung Ho - forthcoming - AI and Society:1-1.
  9. Dreary useless centuries of happiness: Cordwainer Smith’s “Under Old Earth” as an ethical critique of our current Emotion AI goals. Alba Curry - 2022 - Neohelicon 49:465–476.
    This paper explores the ways in which Cordwainer Smith’s short story “Under Old Earth” problematizes emotions, who/what has them, and who/what is granted moral status. Most importantly, however, “Under Old Earth” questions the primacy of happiness in human society, especially where happiness is understood as the absence of other (negative) emotions. As such, “Under Old Earth” challenges the notion, widely held in contemporary ethics, that our moral obligation to one another is mediated through the goal of the attainment of happiness. (...)
  10. Possibilities and ethical issues of entrusting nursing tasks to robots and artificial intelligence. Tomohide Ibuki, Ai Ibuki & Eisuke Nakazawa - 2024 - Nursing Ethics 31 (6):1010-1020.
    In recent years, research in robotics and artificial intelligence (AI) has made rapid progress. It is expected that robots and AI will play a part in the field of nursing and their role might broaden in the future. However, there are areas of nursing practice that cannot or should not be entrusted to robots and AI, because nursing is a highly humane practice, and therefore, there would, perhaps, be some practices that should not be replicated by robots or AI. Therefore, (...)
  11. What is a Turing test for emotional AI? Manh-Tung Ho - forthcoming - AI and Society:1-2.
  12. Why we need to be weary of emotional AI. Peter Mantello & Manh-Tung Ho - forthcoming - AI and Society:1-3.
  13. Modeling the Post-9/11 Meaning-Laden Paradox: From Deep Connection and Deep Struggle to Posttraumatic Stress and Growth. Bu Huang, Amy L. Ai, Terrence N. Tice & Catherine M. Lemieux - 2011 - Archive for the Psychology of Religion 33 (2):173-204.
    The prospective study follows college students after the 9/11 attacks. Based on evidence and trauma-related theories, and guided by reports on positive and negative reactions and meaning-related actions among Americans after 9/11, we explored the seemingly contradictory, yet meaning-related pathways to posttraumatic growth and posttraumatic stress disorder symptoms, indicating the sense of deep interconnectedness and deep conflict. The final model showed that 9/11 emotional turmoil triggered processes of assimilation, as indicated in pathways between prayer coping and perceived spiritual (...)
  14. Personal AI, deception, and the problem of emotional bubbles. Philip Maxwell Thingbø Mlonyeni - forthcoming - AI and Society:1-12.
    Personal AI is a new type of AI companion, distinct from the prevailing forms of AI companionship. Instead of playing a narrow and well-defined social role, like friend, lover, caretaker, or colleague, with a set of pre-determined responses and behaviors, Personal AI is engineered to tailor itself to the user, including learning to mirror the user’s unique emotional language and attitudes. This paper identifies two issues with Personal AI. First, like other AI companions, it is deceptive about the presence (...)
  15. Cognitive, emotive, and ethical aspects of decision making in humans and in AI. Iva Smit, Wendell Wallach & G. E. Lasker (eds.) - 2005 - Windsor, Ont.: International Institute for Advanced Studies in Systems Research and Cybernetics.
  16. Reasons to Respond to AI Emotional Expressions. Rodrigo Díaz & Jonas Blatter - 2025 - American Philosophical Quarterly 62 (1):87-102.
    Human emotional expressions can communicate the emotional state of the expresser, but they can also communicate appeals to perceivers. For example, sadness expressions such as crying request perceivers to aid and support, and anger expressions such as shouting urge perceivers to back off. Some contemporary artificial intelligence (AI) systems can mimic human emotional expressions in a (more or less) realistic way, and they are progressively being integrated into our daily lives. How should we respond to them? Do (...)
  17. The Emotional Risk Posed by AI (Artificial Intelligence) in the Workplace. Maria Danielsen - 2023 - Norsk Filosofisk Tidsskrift 58 (2-3):106-117.
    The existential risk posed by ubiquitous artificial intelligence (AI) is a subject of frequent discussion with descriptions of the prospect of misuse, the fear of mass destruction, and the singularity. In this paper I address an under-explored category of existential risk posed by AI, namely emotional risk. Values are a main source of emotions. By challenging some of our most essential values, AI systems are therefore likely to expose us to emotional risks such as loss of care and (...)
  18. AI and emotions: enhancing green intentions through personalized recommendations—a mediated moderation analysis. Nitika Sharma & Arminda Paço - forthcoming - AI and Society:1-18.
    This study investigates the relationship between the usage of artificial intelligence (AI), specifically AI-based personalized recommendations (AI-PR), and its impact on consumers’ green product awareness (GPA) during online shopping. Recognizing that AI’s influence alone may not significantly affect green buying behaviours, we introduce emotional intelligence (EI) as a moderator to better understand human impact when using AI. The research framework is built upon the Interactive Recommendation Agents and the SOBC framework. A self-administered survey was conducted with 403 respondents, and (...)
  19. AI emotions: Will one know them when one sees them. Zippora Arzi-Gonczarowski - 2002 - In Robert Trappl, Cybernetics and Systems. Austrian Society for Cybernetics Studies. pp. 2--739.
  20. Self‐Deception in Human–AI Emotional Relations. Emilia Kaczmarek - forthcoming - Journal of Applied Philosophy.
    Imagine a man chatting with his AI girlfriend app. He looks at his smartphone and says, ‘Finally, I'm being understood’. Is he deceiving himself? Is there anything morally wrong with it? The human tendency to anthropomorphize AI is well established, and the popularity of AI companions is growing. This article answers three questions: (1) How can being charmed by AI's simulated emotions be considered self‐deception? (2) Why might we have an obligation to avoid harmless self‐deception? (3) When is self‐deception in (...)
  21. AI systems must not confuse users about their sentience or moral status. Eric Schwitzgebel - 2023 - Patterns 4.
    One relatively neglected challenge in ethical artificial intelligence (AI) design is ensuring that AI systems invite a degree of emotional and moral concern appropriate to their moral standing. Although experts generally agree that current AI chatbots are not sentient to any meaningful degree, these systems can already provoke substantial attachment and sometimes intense emotional responses in users. Furthermore, rapid advances in AI technology could soon create AIs of plausibly debatable sentience and moral standing, at least by some relevant (...)
    4 citations
  22. In AI We Trust: Ethics, Artificial Intelligence, and Reliability. Mark Ryan - 2020 - Science and Engineering Ethics 26 (5):2749-2767.
    One of the main difficulties in assessing artificial intelligence (AI) is the tendency for people to anthropomorphise it. This becomes particularly problematic when we attach human moral activities to AI. For example, the European Commission’s High-level Expert Group on AI (HLEG) have adopted the position that we should establish a relationship of trust with AI and should cultivate trustworthy AI (HLEG AI Ethics guidelines for trustworthy AI, 2019, p. 35). Trust is one of the most important and defining activities in (...)
    68 citations
  23. Empathy and AI: cognitive empathy or emotional (affective) empathy? Satinder P. Gill - 2024 - AI and Society 39 (6):2641-2642.
  24. Qing dong yu zhong: sheng si ai yu de zhe xue si kao = Butterflies in the stomach: a philosophical investigation of human emotions. Mu'en Huang - 2019 - Xianggang: Zhong wen da xue chu ban she [Hong Kong: The Chinese University Press].
  25. Emotional artificial intelligence in children’s toys and devices: Ethics, governance and practical remedies. Gilad Rosner & Andrew McStay - 2021 - Big Data and Society 8 (1).
    This article examines the social acceptability and governance of emotional artificial intelligence in children’s toys and other child-oriented devices. To explore this, it conducts interviews with stakeholders with a professional interest in emotional AI, toys, children and policy to consider implications of the usage of emotional AI in children’s toys and services. It also conducts a demographically representative UK national survey to ascertain parental perspectives on networked toys that utilise data about emotions. The article highlights disquiet about (...)
  26. Role of emotions in responsible military AI. José Kerstholt, Mark Neerincx, Karel van den Bosch, Jason S. Metcalfe & Jurriaan van Diggelen - 2023 - Ethics and Information Technology 25 (1):1-4.
  27. Theological and Philosophical Perspectives on Emotional Design in AI and Virtual Reality Learning: Exploring Spiritual Formation and Religious Education. Hanlu Yu & Lanyu Tian - 2025 - European Journal for Philosophy of Religion 17 (2):33-56.
    The increasing integration of artificial intelligence and virtual reality in online learning platforms has raised profound philosophical and theological questions regarding human cognition, emotional engagement, and spiritual formation. However, current virtual learning environments often overlook the significant role of emotional and affective factors in shaping user experience, particularly in the context of religious education and moral development. This study explores an emotionally designed virtual interactive learning platform, incorporating a philosophical inquiry into how technology-mediated learning affects human emotions, ethical (...)
  28. Emotion, Cognition and Artificial Intelligence. Jason Megill - 2014 - Minds and Machines 24 (2):189-199.
    Some have claimed that since machines lack emotional “qualia”, or conscious experiences of emotion, machine intelligence will fall short of human intelligence. I examine this objection, ultimately finding it unpersuasive. I first discuss recent work on emotion that suggests that emotion plays various roles in cognition. I then raise the following question: are phenomenal experiences of emotion an essential or necessary component of the performance of these cognitive abilities? I then sharpen the question by distinguishing between four possible positions (...)
    1 citation
  29. Emotion Analysis in NLP: Trends, Gaps and Roadmap for Future Directions. Flor Miriam Plaza-del-Arco, Alba Curry & Amanda Cercas Curry - forthcoming - arXiv.
    Emotions are a central aspect of communication. Consequently, emotion analysis (EA) is a rapidly growing field in natural language processing (NLP). However, there is no consensus on scope, direction, or methods. In this paper, we conduct a thorough review of 154 relevant NLP publications from the last decade. Based on this review, we address four different questions: (1) How are EA tasks defined in NLP? (2) What are the most prominent emotion frameworks and which emotions are modeled? (3) Is the (...)
  30. Excavating AI: the politics of images in machine learning training sets. Kate Crawford & Trevor Paglen - forthcoming - AI and Society:1-12.
    By looking at the politics of classification within machine learning systems, this article demonstrates why the automated interpretation of images is an inherently social and political project. We begin by asking what work images do in computer vision systems, and what is meant by the claim that computers can “recognize” an image? Next, we look at the method for introducing images into computer systems and look at how taxonomies order the foundational concepts that will determine how a system interprets the (...)
    20 citations
  31. Is Generative AI Increasing the Risk for Technology‐Mediated Trauma Among Vulnerable Populations? Abdul-Fatawu Abdulai - 2025 - Nursing Inquiry 32 (1):e12686.
    The proliferation of Generative Artificial Intelligence (Generative AI) has led to an increased reliance on AI‐generated content for designing and deploying digital health interventions. While generative AI has the potential to facilitate and automate healthcare, there are concerns that AI‐generated content and AI‐generated health advice could trigger, perpetuate, or exacerbate prior traumatic experiences among vulnerable populations. In this discussion article, I examined how generative‐AI‐powered digital health interventions could trigger, perpetuate, or exacerbate emotional trauma among vulnerable populations who rely on (...)
  32. When AI meets PC: exploring the implications of workplace social robots and a human-robot psychological contract. Sarah Bankins & Paul Formosa - 2019 - European Journal of Work and Organizational Psychology 2019.
    The psychological contract refers to the implicit and subjective beliefs regarding a reciprocal exchange agreement, predominantly examined between employees and employers. While contemporary contract research is investigating a wider range of exchanges employees may hold, such as with team members and clients, it remains silent on a rapidly emerging form of workplace relationship: employees’ increasing engagement with technically, socially, and emotionally sophisticated forms of artificially intelligent (AI) technologies. In this paper we examine social robots (also termed humanoid robots) as likely (...)
    7 citations
  33. Emotionalized AI and the meaningfulness gap: an AI ethics perspective. Masoud Toossi Saeidi - forthcoming - AI and Society:1-11.
    This paper demonstrates that with increased philosophical scrutiny regarding the absurdity of life and meaningful living, the development of “Emotionalized AI” falls under ethical considerations. In this context, contemporary philosophical discussions on the meaning of life—as articulated by thinkers like Thomas Nagel, Joshua Seachris, Thaddeus Metz, and Susan Wolf—intersect with recent years’ reviews of AI ethics, particularly those related to meaningful relationships. The main analysis is conducted by explaining a philosophical perspective on meaningful life and revisiting AI ethics reviews based (...)
  34. The Ethics of Emotional Artificial Intelligence: A Mixed Method Analysis. Nader Ghotbi - 2023 - Asian Bioethics Review 15 (4):417-430.
    Emotions play a significant role in human relations, decision-making, and the motivation to act on those decisions. There are ongoing attempts to use artificial intelligence (AI) to read human emotions, and to predict human behavior or actions that may follow those emotions. However, a person’s emotions cannot be easily identified, measured, and evaluated by others, including automated machines and algorithms run by AI. The ethics of emotional AI is under research and this study has examined the emotional variables (...)
    1 citation
  35. AI Can Help Us Live More Deliberately. Julian Friedland - 2019 - MIT Sloan Management Review 60 (4).
    Our rapidly increasing reliance on frictionless AI interactions may increase cognitive and emotional distance, thereby letting our adaptive resilience slacken and our ethical virtues atrophy from disuse. Many trends already well underway involve the offloading of cognitive, emotional, and ethical labor to AI software in myriad social, civil, personal, and professional contexts. Gradually, we may lose the inclination and capacity to engage in critically reflective thought, making us more cognitively and emotionally vulnerable and thus more anxious and prone (...)
    2 citations
  36. The AI-mediated intimacy economy: a paradigm shift in digital interactions. Ayşe Aslı Bozdağ - forthcoming - AI and Society:1-22.
    This article critically examines the paradigm shift from the attention economy to the intimacy economy—a market system where personal and emotional data are exchanged for customized experiences that cater to individual emotional and psychological needs. It explores how AI transforms these personal and emotional inputs into services, thereby raising essential questions about the authenticity of digital interactions and the potential commodification of intimate experiences. The study delineates the roles of human–computer interaction and AI in deepening personal connections, (...)
  37. “All AIs are Psychopaths”? The Scope and Impact of a Popular Analogy. Elina Nerantzi - 2025 - Philosophy and Technology 38 (1):1-24.
    Artificial Intelligence (AI) Agents are often compared to psychopaths in popular news articles. The headlines are ‘eye-catching’, but the questions of what this analogy means or why it matters are hardly answered. The aim of this paper is to take this popular analogy ‘seriously’. By that, I mean two things. First, I aim to explore the scope of this analogy, i.e. to identify and analyse the shared properties of AI agents and psychopaths, namely, their lack of moral emotions and their (...)
  38. Computer says "No": The Case Against Empathetic Conversational AI. Alba Curry & Amanda Cercas Curry - 2023 - Findings of the Association for Computational Linguistics: ACL 2023.
    Emotions are an integral part of human cognition and they guide not only our understanding of the world but also our actions within it. As such, whether we soothe or flame an emotion is not inconsequential. Recent work in conversational AI has focused on responding empathetically to users, validating and soothing their emotions without a real basis. This AI-aided emotional regulation can have negative consequences for users and society, tending towards a one-noted happiness defined as only the absence of (...)
  39. Why Emotions Do Not Solve the Frame Problem. Madeleine Ransom - 2016 - In Vincent C. Müller, Fundamental Issues of Artificial Intelligence. Cham: Springer.
    Attempts to engineer a generally intelligent artificial agent have yet to meet with success, largely due to the (intercontext) frame problem. Given that humans are able to solve this problem on a daily basis, one strategy for making progress in AI is to look for disanalogies between humans and computers that might account for the difference. It has become popular to appeal to the emotions as the means by which the frame problem is solved in human agents. The purpose of (...)
  40. Second-Person Authenticity and the Mediating Role of AI: A Moral Challenge for Human-to-Human Relationships? Davide Battisti - 2025 - Philosophy and Technology 38 (1):1-19.
    The development of AI tools, such as large language models and speech emotion and facial expression recognition systems, has raised new ethical concerns about AI’s impact on human relationships. While much of the debate has focused on human-AI relationships, less attention has been devoted to another class of ethical issues, which arise when AI mediates human-to-human relationships. This paper opens the debate on these issues by analyzing the case of romantic relationships, particularly those in which one partner uses AI tools, (...)
    1 citation
  41. Can we Expect AI to be Wise? — A Wisdom, Knowledge (Management), Resonance, and Cognitive Science Perspective. Markus Peschl, Ernst Wageneder, Alexander Kaiser & Clemens Kerschbaum - unknown
    This paper investigates whether AI can possess wisdom, a complex and deeply human capacity. We adopt an interdisciplinary approach, drawing on philosophy, 4E cognition, Material Engagement Theory, and engaged epistemology, to argue that wisdom is a dynamic force unfolding through meaningful life experiences and resonant interactions with the world. Central to our discussion is Rosa's concept of resonance, essential for fostering personal growth, emotional empathy, as well as existential and bodily connectedness to the world and its unfolding into an (...)
  42. Is Your Computer Lying? AI and Deception. Noreen Herzfeld - 2023 - Sophia 62 (4):665-678.
    Recent developments in AI, especially the spectacular success of Large Language models, have instigated renewed questioning of what remains distinctively human. As AI stands poised to take over more and more human tasks, what is left that distinguishes humans? One way we might identify a humanlike intelligence would be when we detect it telling lies. Yet AIs lack both the intention and the motivation to truly tell lies, instead producing merely bullshit. With neither emotions, embodiment, nor the social awareness that (...)
    3 citations
  43. Reflections on Putting AI Ethics into Practice: How Three AI Ethics Approaches Conceptualize Theory and Practice. Hannah Bleher & Matthias Braun - 2023 - Science and Engineering Ethics 29 (3):1-21.
    Critics currently argue that applied ethics approaches to artificial intelligence (AI) are too principles-oriented and entail a theory–practice gap. Several applied ethical approaches try to prevent such a gap by conceptually translating ethical theory into practice. In this article, we explore how the currently most prominent approaches of AI ethics translate ethics into practice. Therefore, we examine three approaches to applied AI ethics: the embedded ethics approach, the ethically aligned approach, and the Value Sensitive Design (VSD) approach. We analyze each (...)
    5 citations
  44. Embodied AI, Creation, and Cog. Anne Foerst - 1998 - Zygon 33 (3):455-461.
    This is a reply to comments on my paper Cog, a Humanoid Robot, and the Questions of the Image of God; one was written by Mary Gerhart and Allan Melvin Russell, and another one by Helmut Reich. I will start with the suggested analogy of the relationship between God and us and the one between us and the humanoid robot Cog and will show why this analogy is not helpful for the dialogue between theology and artificial intelligence (AI). Such a (...)
    4 citations
  45. On The Adequacy of Emotions and Existential Feelings. Achim Stephan - 2017 - Rivista Internazionale di Filosofia e Psicologia 8 (1):1-13.
    In the analytic tradition of the philosophy of emotions the folk notion of adequacy is understood with regard to – at least four – different questions, viz. a moral question, a prudential question, an epistemic question, and a fittingness question. Usually, the fittingness question is treated as being the central one. I have some doubts concerning this assessment, particularly when it comes to substantial – interpersonal or cultural – controversies about whether a specific emotional response is adequate or (...)
    1 citation
  46. Emotions in Relation. Epistemological and Ethical Scaffolding for Mixed Human-Robot Social Ecologies. Luisa Damiano & Paul Gerard Dumouchel - 2020 - Humana Mente 13 (37).
    In this article we tackle the core question of machine emotion research – “Can machines have emotions?” – in the context of “social robots”, a new class of machines designed to function as “social partners” for humans. Our aim, however, is not to provide an answer to the question “Can robots have emotions?” Rather we argue that the “robotics of emotion” moves us to reformulate it into a different one – “Can robots affectively coordinate with humans?” Developing a series of (...)
    3 citations
  47. Is There a Domain of Linguistic Competence AI Cannot Grasp? Marius Mumbeck - 2024 - Grazer Philosophische Studien 101 (2):216-230.
    Linguistic competence is, among other things, the cognitive ability of using words appropriately. Following this notion, being a competent language user cannot be merely simulated because pretending to use words appropriately already is using words appropriately. Since current AI already can use words appropriately it may be that AI already has linguistic competence instead of merely simulating it. However, in this article I argue that there is at least one domain of linguistic competence that can be simulated: grasping lexical effects. (...)
  48. Fear of AI: an inquiry into the adoption of autonomous cars in spite of fear, and a theoretical framework for the study of artificial intelligence technology acceptance. Federico Cugurullo & Ransford A. Acheampong - forthcoming - AI and Society:1-16.
    Artificial intelligence (AI) is becoming part of the everyday. During this transition, people’s intention to use AI technologies is still unclear and emotions such as fear are influencing it. In this paper, we focus on autonomous cars to first verify empirically the extent to which people fear AI and then examine the impact that fear has on their intention to use AI-driven vehicles. Our research is based on a systematic survey and it reveals that while individuals are largely afraid of (...)
    2 citations
  49. Can an AI-carebot be filial? Reflections from Confucian ethics. Kathryn Muyskens, Yonghui Ma & Michael Dunn - 2024 - Nursing Ethics 31 (6):999-1009.
    This article discusses the application of artificially intelligent robots within eldercare and explores a series of ethical considerations, including the challenges that AI (Artificial Intelligence) technology poses to traditional Chinese Confucian filial piety. From the perspective of Confucian ethics, the paper argues that robots cannot adequately fulfill duties of care. Due to their detachment from personal relationships and interactions, the “emotions” of AI robots are merely performative reactions in different situations, rather than actual emotional abilities. No matter how “humanized” (...)
  50. “Empathy Code”: The Dangers of Automating Empathy in Business. Nicola Thomas & Niall Docherty - forthcoming - Business and Society.
    Organizations are increasingly adopting “Emotional AI” to monitor and influence employee emotions, aiming to create more empathetic workplaces. However, we argue that automating empathy risks fostering empathy skepticism, alienating employees, exacerbating mental health issues, and eroding trust. We call on organizations to address the root causes of negative workplace emotions and leverage AI as a tool to complement—rather than replace—empathy, fostering workplaces that genuinely prioritize care and trust.
Showing 1–50 of 962