Results for 'AI value alignment'

982 found
  1. Democratizing value alignment: from authoritarian to democratic AI ethics. Linus Ta-Lun Huang, Gleb Papyshev & James K. Wong - 2024 - AI and Ethics.
    Value alignment is essential for ensuring that AI systems act in ways that are consistent with human values. Existing approaches, such as reinforcement learning with human feedback and constitutional AI, however, exhibit power asymmetries and lack transparency. These “authoritarian” approaches fail to adequately accommodate a broad array of human opinions, raising concerns about whose values are being prioritized. In response, we introduce the Dynamic Value Alignment approach, theoretically grounded in the principles of parallel constraint satisfaction, which (...)
  2. (1 other version) An Enactive Approach to Value Alignment in Artificial Intelligence: A Matter of Relevance. Michael Cannon - 2021 - In Vincent C. Müller (ed.), Philosophy and Theory of AI. Springer Cham. pp. 119-135.
    The “Value Alignment Problem” is the challenge of how to align the values of artificial intelligence with human values, whatever they may be, such that AI does not pose a risk to the existence of humans. Existing approaches appear to conceive of the problem as "how do we ensure that AI solves the problem in the right way", in order to avoid the possibility of AI turning humans into paperclips in order to “make more paperclips” or eradicating the (...)
  3. Honor Ethics: The Challenge of Globalizing Value Alignment in AI. Stephen Tze-Inn Wu, Dan Demetriou & Rudwan Ali Husain - 2023 - 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT '23), June 12-15, 2023.
    Some researchers have recognized that privileged communities dominate the discourse on AI Ethics, and other voices need to be heard. As such, we identify the current ethics milieu as arising from WEIRD (Western, Educated, Industrialized, Rich, Democratic) contexts, and aim to expand the discussion to non-WEIRD global communities, who are also stakeholders in global sociotechnical systems. We argue that accounting for honor, along with its values and related concepts, would better approximate a global ethical perspective. This complex concept already underlies (...)
  4. Value Alignment for Advanced Artificial Judicial Intelligence. Christoph Winter, Nicholas Hollman & David Manheim - 2023 - American Philosophical Quarterly 60 (2):187-203.
    This paper considers challenges resulting from the use of advanced artificial judicial intelligence (AAJI). We argue that these challenges should be considered through the lens of value alignment. Instead of discussing why specific goals and values, such as fairness and nondiscrimination, ought to be implemented, we consider the question of how AAJI can be aligned with goals and values more generally, in order to be reliably integrated into legal and judicial systems. This value alignment framing draws (...)
    2 citations
  5. Variable Value Alignment by Design; averting risks with robot religion. Jeffrey White - forthcoming - Embodied Intelligence 2023.
    One approach to alignment with human values in AI and robotics is to engineer artificial systems isomorphic with human beings. The idea is that robots so designed may autonomously align with human values through similar developmental processes, to realize project ideal conditions through iterative interaction with social and object environments just as humans do, such as are expressed in narratives and life stories. One persistent problem with human value orientation is that different human beings champion different values (...)
  6. Value alignment, human enhancement, and moral revolutions. Ariela Tubert & Justin Tiehen - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    Human beings are internally inconsistent in various ways. One way to develop this thought involves using the language of value alignment: the values we hold are not always aligned with our behavior, and are not always aligned with each other. Because of this self-misalignment, there is room for potential projects of human enhancement that involve achieving a greater degree of value alignment than we presently have. Relatedly, discussions of AI ethics sometimes focus on what is known (...)
  7. The linguistic dead zone of value-aligned agency, natural and artificial. Travis LaCroix - 2024 - Philosophical Studies:1-23.
    The value alignment problem for artificial intelligence (AI) asks how we can ensure that the “values”—i.e., objective functions—of artificial systems are aligned with the values of humanity. In this paper, I argue that linguistic communication is a necessary condition for robust value alignment. I discuss the consequences that the truth of this claim would have for research programmes that attempt to ensure value alignment for AI systems—or, more loftily, those programmes that seek to design (...)
  8. A comment on the pursuit to align AI: we do not need value-aligned AI, we need AI that is risk-averse. Rebecca Raper - forthcoming - AI and Society:1-3.
  9. Aesthetic Value and the AI Alignment Problem. Alice C. Helliwell - 2024 - Philosophy and Technology 37 (4):1-21.
    The threat from possible future superintelligent AI has given rise to discussion of the so-called “value alignment problem”. This is the problem of how to ensure artificially intelligent systems align with human values, and thus (hopefully) mitigate risks associated with them. Naturally, AI value alignment is often discussed in relation to morally relevant values, such as the value of human lives or human wellbeing. However, solutions to the value alignment problem target all human (...)
  10. Instilling moral value alignment by means of multi-objective reinforcement learning. Juan Antonio Rodriguez-Aguilar, Maite Lopez-Sanchez, Marc Serramia & Manel Rodriguez-Soto - 2022 - Ethics and Information Technology 24 (1).
    AI research is being challenged with ensuring that autonomous agents learn to behave ethically, namely in alignment with moral values. Here, we propose a novel way of tackling the value alignment problem as a two-step process. The first step consists of formalising moral values and value-aligned behaviour based on philosophical foundations. Our formalisation is compatible with the framework of (Multi-Objective) Reinforcement Learning, to ease the handling of an agent’s individual and ethical objectives. The second step (...)
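    To make the two-step picture in the abstract above concrete, here is a minimal sketch of the kind of setup (Multi-Objective) Reinforcement Learning works with: each action yields a reward vector holding an individual and an ethical component, and a scalarisation collapses the vector into a single learning signal. The names, the two-component vector, and the linear weighting below are illustrative assumptions, not the formalisation given in the paper.
```python
# Illustrative sketch only (assumed names, not the paper's formalisation):
# a two-objective reward and a linear scalarisation that lets a standard
# RL agent trade off task performance against ethical compliance.

from typing import NamedTuple

class RewardVector(NamedTuple):
    individual: float  # task-performance component of the reward
    ethical: float     # alignment-with-moral-values component

def scalarise(r: RewardVector, w_ethical: float) -> float:
    """Collapse the reward vector into one scalar; a sufficiently large
    w_ethical makes ethically compliant policies dominate the ordering."""
    return r.individual + w_ethical * r.ethical

# Example: an action that earns task reward while violating a moral norm
# becomes unattractive once the ethical weight is high enough.
r = RewardVector(individual=1.0, ethical=-1.0)
print(scalarise(r, w_ethical=0.5))  # 0.5: the violation still pays
print(scalarise(r, w_ethical=2.0))  # -1.0: the violation no longer pays
```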
  11. Values in science and AI alignment research. Leonard Dung - manuscript
    Roughly, empirical AI alignment research (AIA) is an area of AI research which investigates empirically how to design AI systems in line with human goals. This paper examines the role of non-epistemic values in AIA. It argues that: (1) Sciences differ in the degree to which values influence them. (2) AIA is strongly value-laden. (3) This influence of values is managed inappropriately and thus threatens AIA’s epistemic integrity and ethical beneficence. (4) AIA should strive to achieve value (...)
  12. Instilling moral value alignment by means of multi-objective reinforcement learning. M. Rodriguez-Soto, M. Serramia, M. Lopez-Sanchez & J. Antonio Rodriguez-Aguilar - 2022 - Ethics and Information Technology 24 (9).
    AI research is being challenged with ensuring that autonomous agents learn to behave ethically, namely in alignment with moral values. Here, we propose a novel way of tackling the value alignment problem as a two-step process. The first step consists of formalising moral values and value-aligned behaviour based on philosophical foundations. Our formalisation is compatible with the framework of (Multi-Objective) Reinforcement Learning, to ease the handling of an agent’s individual and ethical objectives. The second step (...)
  13. Can the predictive processing model of the mind ameliorate the value-alignment problem? William Ratoff - 2021 - Ethics and Information Technology 23 (4):739-750.
    How do we ensure that future generally intelligent AI share our values? This is the value-alignment problem. It is a weighty matter. After all, if AI are neutral with respect to our wellbeing, or worse, actively hostile toward us, then they pose an existential threat to humanity. Some philosophers have argued that one important way in which we can mitigate this threat is to develop only AI that shares our values or that has values that ‘align with’ ours. (...)
    1 citation
  14. Aligning artificial intelligence with human values: reflections from a phenomenological perspective. Shengnan Han, Eugene Kelly, Shahrokh Nikou & Eric-Oluf Svee - 2022 - AI and Society 37 (4):1383-1395.
    Artificial Intelligence (AI) must be directed at humane ends. The development of AI has produced great uncertainties of ensuring AI alignment with human values (AI value alignment) through AI operations from design to use. For the purposes of addressing this problem, we adopt the phenomenological theories of material values and technological mediation to be that beginning step. In this paper, we first discuss the AI value alignment from the relevant AI studies. Second, we briefly present (...)
    4 citations
  15. Challenges of Aligning Artificial Intelligence with Human Values. Margit Sutrop - 2020 - Acta Baltica Historiae et Philosophiae Scientiarum 8 (2):54-72.
    As artificial intelligence systems are becoming increasingly autonomous and will soon be able to make decisions on their own about what to do, AI researchers have started to talk about the need to align AI with human values. The AI ‘value alignment problem’ faces two kinds of challenges—a technical and a normative one—which are interrelated. The technical challenge deals with the question of how to encode human values in artificial intelligence. The normative challenge is associated with two questions: (...)
    5 citations
  16. Toleration and Justice in the Laozi: Engaging with Tao Jiang's Origins of Moral-Political Philosophy in Early China. Ai Yuan - 2023 - Philosophy East and West 73 (2):466-475.
    In lieu of an abstract, here is a brief excerpt of the content: This review article engages with Tao Jiang's ground-breaking monograph on the Origins of Moral-Political Philosophy in Early China, with particular focus on the articulation of toleration and justice in the Laozi (otherwise called the Daodejing). Jiang discusses a naturalistic turn and the re-alignment of values in the Laozi, resulting in a (...)
  17. Artificial Intelligence, Values, and Alignment. Iason Gabriel - 2020 - Minds and Machines 30 (3):411-437.
    This paper looks at philosophical questions that arise in the context of AI alignment. It defends three propositions. First, normative and technical aspects of the AI alignment problem are interrelated, creating space for productive engagement between people working in both domains. Second, it is important to be clear about the goal of alignment. There are significant differences between AI that aligns with instructions, intentions, revealed preferences, ideal preferences, interests and values. A principle-based approach to AI alignment, (...)
    61 citations
  18. Disagreement, AI alignment, and bargaining. Harry R. Lloyd - forthcoming - Philosophical Studies:1-31.
    New AI technologies have the potential to cause unintended harms in diverse domains including warfare, judicial sentencing, biomedicine and governance. One strategy for realising the benefits of AI whilst avoiding its potential dangers is to ensure that new AIs are properly ‘aligned’ with some form of ‘alignment target.’ One danger of this strategy is that – dependent on the alignment target chosen – our AIs might optimise for objectives that reflect the values only of a certain subset of (...)
  19. AI, alignment, and the categorical imperative. Fritz McDonald - 2023 - AI and Ethics 3:337-344.
    Tae Wan Kim, John Hooker, and Thomas Donaldson make an attempt, in recent articles, to solve the alignment problem. As they define the alignment problem, it is the issue of how to give AI systems moral intelligence. They contend that one might program machines with a version of Kantian ethics cast in deontic modal logic. On their view, machines can be aligned with human values if such machines obey principles of universalization and autonomy, as well as a deontic (...)
  20. Beyond ideals: why the (medical) AI industry needs to motivate behavioural change in line with fairness and transparency values, and how it can do it. Alice Liefgreen, Netta Weinstein, Sandra Wachter & Brent Mittelstadt - 2024 - AI and Society 39 (5):2183-2199.
    Artificial intelligence (AI) is increasingly relied upon by clinicians for making diagnostic and treatment decisions, playing an important role in imaging, diagnosis, risk analysis, lifestyle monitoring, and health information management. While research has identified biases in healthcare AI systems and proposed technical solutions to address these, we argue that effective solutions require human engagement. Furthermore, there is a lack of research on how to motivate the adoption of these solutions and promote investment in designing AI systems that align with values (...)
    1 citation
  21. Calibrating machine behavior: a challenge for AI alignment. Erez Firt - 2023 - Ethics and Information Technology 25 (3):1-8.
    When discussing AI alignment, we usually refer to the problem of teaching or training advanced autonomous AI systems to make decisions that are aligned with human values or preferences. Proponents of this approach believe it can be employed as means to stay in control over sophisticated intelligent systems, thus avoiding certain existential risks. We identify three general obstacles on the path to implementation of value alignment: a technological/technical obstacle, a normative obstacle, and a calibration problem. Presupposing, for (...)
    1 citation
  22. Existentialist risk and value misalignment. Ariela Tubert & Justin Tiehen - forthcoming - Philosophical Studies.
    We argue that two long-term goals of AI research stand in tension with one another. The first involves creating AI that is safe, where this is understood as solving the problem of value alignment. The second involves creating artificial general intelligence, meaning AI that operates at or beyond human capacity across all or many intellectual domains. Our argument focuses on the human capacity to make what we call “existential choices”, choices that transform who we are as persons, including (...)
    1 citation
  23. Beyond Preferences in AI Alignment. Tan Zhi-Xuan, Micah Carroll, Matija Franklin & Hal Ashton - forthcoming - Philosophical Studies:1-51.
    The dominant practice of AI alignment assumes (1) that preferences are an adequate representation of human values, (2) that human rationality can be understood in terms of maximizing the satisfaction of preferences, and (3) that AI systems should be aligned with the preferences of one or more humans to ensure that they behave safely and in accordance with our values. Whether implicitly followed or explicitly endorsed, these commitments constitute what we term a preferentist approach to AI alignment. In this paper, (...)
  24. ‘Interpretability’ and ‘Alignment’ are Fool’s Errands: A Proof that Controlling Misaligned Large Language Models is the Best Anyone Can Hope For. Marcus Arvan - forthcoming - AI and Society.
    This paper uses famous problems from philosophy of science and philosophical psychology—underdetermination of theory by evidence, Nelson Goodman’s new riddle of induction, theory-ladenness of observation, and “Kripkenstein’s” rule-following paradox—to show that it is empirically impossible to reliably interpret which functions a large language model (LLM) AI has learned, and thus, that reliably aligning LLM behavior with human values is provably impossible. Sections 2 and 3 show that because of how complex LLMs are, researchers must interpret their learned functions largely in (...)
  25. Aligning artificial intelligence with moral intuitions: an intuitionist approach to the alignment problem. Dario Cecchini, Michael Pflanzer & Veljko Dubljevic - 2024 - AI and Ethics:1-11.
    As artificial intelligence (AI) continues to advance, one key challenge is ensuring that AI aligns with certain values. However, in the current diverse and democratic society, reaching a normative consensus is complex. This paper delves into the methodological aspect of how AI ethicists can effectively determine which values AI should uphold. After reviewing the most influential methodologies, we detail an intuitionist research agenda that offers guidelines for aligning AI applications with a limited set of reliable moral intuitions, each underlying a (...)
    1 citation
  26. Friendly AI. Barbro Fröding & Martin Peterson - 2020 - Ethics and Information Technology 23 (3):207-214.
    In this paper we discuss what we believe to be one of the most important features of near-future AIs, namely their capacity to behave in a friendly manner to humans. Our analysis of what it means for an AI to behave in a friendly manner does not presuppose that proper friendships between humans and AI systems could exist. That would require reciprocity, which is beyond the reach of near-future AI systems. Rather, we defend the claim that social AIs should be (...)
    3 citations
  27. Toward children-centric AI: a case for a growth model in children-AI interactions. Karolina La Fors - 2024 - AI and Society 39 (3):1303-1315.
    This article advocates for a hermeneutic model for children-AI (age group 7–11 years) interactions in which the desirable purpose of children’s interaction with artificial intelligence (AI) systems is children's growth. The article perceives AI systems with machine-learning components as having a recursive element when interacting with children. They can learn from an encounter with children and incorporate data from interaction, not only from prior programming. Given the purpose of growth and this recursive element of AI, the article argues for distinguishing (...)
  28. Is Alignment Unsafe? Cameron Domenico Kirk-Giannini - 2024 - Philosophy and Technology 37 (110):1–4.
    Inchul Yum (2024) argues that the widespread adoption of language agent architectures would likely increase the risk posed by AI by simplifying the process of aligning artificial systems with human values and thereby making it easier for malicious actors to use them to cause a variety of harms. Yum takes this to be an example of a broader phenomenon: progress on the alignment problem is likely to be net safety-negative because it makes artificial systems easier for malicious actors to (...)
  29. Automation, Alignment, and the Cooperative Interface. Julian David Jonker - 2024 - The Journal of Ethics 28 (3):483-504.
    The paper demonstrates that social alignment is distinct from value alignment as it is currently understood in the AI safety literature, and argues that social alignment is an important research agenda. Work provides an important example for the argument, since work is a cooperative endeavor, and it is part of the larger manifold of social cooperation. These cooperative aspects of work are individually and socially valuable, and so they must be given a central place when evaluating (...)
  30. The Role of Engineers in Harmonising Human Values for AI Systems Design. Steven Umbrello - 2022 - Journal of Responsible Technology 10 (July):100031.
    Most engineers work within social structures governing and governed by a set of values that primarily emphasise economic concerns. The majority of innovations derive from these loci. Given the effects of these innovations on various communities, it is imperative that the values they embody are aligned with those societies. Like other transformative technologies, artificial intelligence systems can be designed by a single organisation but be diffused globally, demonstrating impacts over time. This paper argues that in order to design for this (...)
    3 citations
  31. Fiction writing workshops to explore staff perceptions of artificial intelligence (AI) in higher education. Neil Dixon & Andrew Cox - forthcoming - AI and Society:1-16.
    This study explores perceptions of artificial intelligence (AI) in the higher education workplace through innovative use of fiction writing workshops. Twenty-three participants took part in three workshops, imagining the application of AI assistants and chatbots to their roles. Key themes were identified, including perceived benefits and challenges of AI implementation, interface design implications, and factors influencing task delegation to AI. Participants envisioned AI primarily as a tool to enhance task efficiency rather than fundamentally transform job roles. This research contributes insights (...)
  32. Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence. Shakir Mohamed, Marie-Therese Png & William Isaac - 2020 - Philosophy and Technology 33 (4):659-684.
    This paper explores the important role of critical science, and in particular of post-colonial and decolonial theories, in understanding and shaping the ongoing advances in artificial intelligence. Artificial intelligence is viewed as amongst the technological advances that will reshape modern societies and their relations. While the design and deployment of systems that continually adapt holds the promise of far-reaching positive change, they simultaneously pose significant risks, especially to already vulnerable peoples. Values and power are central to this discussion. Decolonial theories (...)
    40 citations
  33. Reconstructing AI Ethics Principles: Rawlsian Ethics of Artificial Intelligence. Salla Westerstrand - 2024 - Science and Engineering Ethics 30 (5):1-21.
    The popularisation of Artificial Intelligence (AI) technologies has sparked discussion about their ethical implications. This development has forced governmental organisations, NGOs, and private companies to react and draft ethics guidelines for future development of ethical AI systems. Whereas many ethics guidelines address values familiar to ethicists, they seem to lack ethical justifications. Furthermore, most tend to neglect the impact of AI on democracy, governance, and public deliberation. Existing research suggests, however, that AI can threaten key elements of western democracies (...)
  34. Where lies the grail? AI, common sense, and human practical intelligence. William Hasselberger & Micah Lott - forthcoming - Phenomenology and the Cognitive Sciences:1-22.
    The creation of machines with intelligence comparable to human beings—so-called “human-level” and “general” intelligence—is often regarded as the Holy Grail of Artificial Intelligence (AI) research. However, many prominent discussions of AI lean heavily on the notion of human-level intelligence to frame AI research, but then rely on conceptions of human cognitive capacities, including “common sense,” that are sketchy, one-sided, philosophically loaded, and highly contestable. Our goal in this essay is to bring into view some underappreciated features of the practical intelligence (...)
  35. Machines learning values. Steve Petersen - 2020 - In S. Matthew Liao (ed.), Ethics of Artificial Intelligence. Oxford University Press.
    Whether it would take one decade or several centuries, many agree that it is possible to create a *superintelligence*---an artificial intelligence with a godlike ability to achieve its goals. And many who have reflected carefully on this fact agree that our best hope for a "friendly" superintelligence is to design it to *learn* values like ours, since our values are too complex to program or hardwire explicitly. But the value learning approach to AI safety faces three particularly philosophical puzzles: (...)
    2 citations
  36. AI Ethics by Design: Implementing Customizable Guardrails for Responsible AI Development. Kristina Sekrst, Jeremy McHugh & Jonathan Rodriguez Cefalu - manuscript
    This paper explores the development of an ethical guardrail framework for AI systems, emphasizing the importance of customizable guardrails that align with diverse user values and underlying ethics. We address the challenges of AI ethics by proposing a structure that integrates rules, policies, and AI assistants to ensure responsible AI behavior, while comparing the proposed framework to the existing state-of-the-art guardrails. By focusing on practical mechanisms for implementing ethical standards, we aim to enhance transparency, user autonomy, and continuous improvement in (...)
  37. Systematizing AI Governance through the Lens of Ken Wilber's Integral Theory. Ammar Younas & Yi Zeng - manuscript
    We apply Ken Wilber's Integral Theory to AI governance, demonstrating its ability to systematize diverse approaches in the current multifaceted AI governance landscape. By analyzing ethical considerations, technological standards, cultural narratives, and regulatory frameworks through Integral Theory's four quadrants, we offer a comprehensive perspective on governance needs. This approach aligns AI governance with human values, psychological well-being, cultural norms, and robust regulatory standards. Integral Theory’s emphasis on interconnected individual and collective experiences addresses the deeper aspects of AI-related issues. Additionally, we (...)
  38. Adaptable robots, ethics, and trust: a qualitative and philosophical exploration of the individual experience of trustworthy AI. Stephanie Sheir, Arianna Manzini, Helen Smith & Jonathan Ives - forthcoming - AI and Society:1-14.
    Much has been written about the need for trustworthy artificial intelligence (AI), but the underlying meaning of trust and trustworthiness can vary or be used in confusing ways. It is not always clear whether individuals are speaking of a technology’s trustworthiness, a developer’s trustworthiness, or simply of gaining the trust of users by any means. In sociotechnical circles, trustworthiness is often used as a proxy for ‘the good’, illustrating the moral heights to which technologies and developers ought to aspire, at (...)
    1 citation
  39. Human-Centered AI: The Aristotelian Approach. Jacob Sparks & Ava Wright - 2023 - Divus Thomas 126 (2):200-218.
    As we build increasingly intelligent machines, we confront difficult questions about how to specify their objectives. One approach, which we call human-centered, tasks the machine with the objective of learning and satisfying human objectives by observing our behavior. This paper considers how human-centered AI should conceive the humans it is trying to help. We argue that an Aristotelian model of human agency has certain advantages over the currently dominant theory drawn from economics.
  40. Domesticating Artificial Intelligence. Luise Müller - 2022 - Moral Philosophy and Politics 9 (2):219-237.
    For their deployment in human societies to be safe, AI agents need to be aligned with value-laden cooperative human life. One way of solving this “problem of value alignment” is to build moral machines. I argue that the goal of building moral machines aims at the wrong kind of ideal, and that instead, we need an approach to value alignment that takes seriously the categorically different cognitive and moral capabilities between human and AI agents, a (...)
  41. (1 other version) Applying a principle of explicability to AI research in Africa: should we do it? Mary Carman & Benjamin Rosman - 2020 - Ethics and Information Technology 23 (2):107-117.
    Developing and implementing artificial intelligence (AI) systems in an ethical manner faces several challenges specific to the kind of technology at hand, including ensuring that decision-making systems making use of machine learning are just, fair, and intelligible, and are aligned with our human values. Given that values vary across cultures, an additional ethical challenge is to ensure that these AI systems are not developed according to some unquestioned but questionable assumption of universal norms but are in fact compatible with the (...)
    7 citations
  42. Perspectives of patients and clinicians on big data and AI in health: a comparative empirical investigation. Patrik Hummel, Matthias Braun, Serena Bischoff, David Samhammer, Katharina Seitz, Peter A. Fasching & Peter Dabrock - 2024 - AI and Society 39 (6):2973-2987.
    Background Big data and AI applications now play a major role in many health contexts. Much research has already been conducted on ethical and social challenges associated with these technologies. Likewise, there are already some studies that investigate empirically which values and attitudes play a role in connection with their design and implementation. What is still in its infancy, however, is the comparative investigation of the perspectives of different stakeholders. Methods To explore this issue in a multi-faceted manner, we conducted (...)
  43. From Confucius to Coding and Avicenna to Algorithms: Cultivating Ethical AI Development through Cross-Cultural Ancient Wisdom. Ammar Younas & Yi Zeng - manuscript
    This paper explores the potential of integrating ancient educational principles from diverse eastern cultures into modern AI ethics curricula. It draws on the rich educational traditions of ancient China, India, Arabia, Persia, Japan, Tibet, Mongolia, and Korea, highlighting their emphasis on philosophy, ethics, holistic development, and critical thinking. By examining these historical educational systems, the paper establishes a correlation with modern AI ethics principles, advocating for the inclusion of these ancient teachings in current AI development and education. The proposed integration (...)
  44. Healthcare Value Assessment: A Critical Review of Economic Outcome Metrics and Future Directions. Masad Turki Almutairi, Ashwaq Mansour Aljohani, Yosef Awad Aljohani, Zaid Awaidh Sh Almotairi, Abdulmajeed Ayid Almatrafi, Fuad Mohammed Alahmadi, Theban Abdullah Alghamdi, Abdulaziz Mohamed Alahmed, Ahmed Abdullah Alsharif, Aysha Turki Almutairi, Waleed Taleb N. Almughamisi, Faizah Turki Alharbi, Maryam Ibrahim M. Kdaysah, Shahad Mahbub Aloufi & Theyab Mohammed Aldawsari - forthcoming - Evolutionary Studies in Imaginative Culture:112-131.
    This paper provides a critical review of economic outcome metrics used in healthcare value assessment, emphasizing the evolving landscape of resource allocation, patient-centered approaches, and standardization efforts. With healthcare costs rising globally, the efficient allocation of limited resources is essential. Metrics like Quality-Adjusted Life Years (QALYs), Disability-Adjusted Life Years (DALYs), Incremental Cost-Effectiveness Ratios (ICERs), and Cost-Benefit Analysis (CBA) are central to guiding funding decisions, influencing insurance coverage, and shaping treatment prioritization. Emerging trends, such as the integration of artificial intelligence (...)
  45. Language Agents and Malevolent Design. Inchul Yum - 2024 - Philosophy and Technology 37 (104):1-19.
    Language agents are AI systems capable of understanding and responding to natural language, potentially facilitating the process of encoding human goals into AI systems. However, this paper argues that if language agents can achieve easy alignment, they also increase the risk of malevolent agents building harmful AI systems aligned with destructive intentions. The paper contends that if training AI becomes sufficiently easy or is perceived as such, it enables malicious actors, including rogue states, terrorists, and criminal organizations, to create (...)
    1 citation
  46. AI-Inclusivity in Healthcare: Motivating an Institutional Epistemic Trust Perspective. Kritika Maheshwari, Christoph Jedan, Imke Christiaans, Mariëlle van Gijn, Els Maeckelberghe & Mirjam Plantinga - 2024 - Cambridge Quarterly of Healthcare Ethics:1-15.
    This paper motivates institutional epistemic trust as an important ethical consideration informing the responsible development and implementation of artificial intelligence (AI) technologies (or AI-inclusivity) in healthcare. Drawing on recent literature on epistemic trust and public trust in science, we start by examining the conditions under which we can have institutional epistemic trust in AI-inclusive healthcare systems and their members as providers of medical information and advice. In particular, we discuss that institutional epistemic trust in AI-inclusive healthcare depends, in part, on (...)
  47. Reflections on Putting AI Ethics into Practice: How Three AI Ethics Approaches Conceptualize Theory and Practice. Hannah Bleher & Matthias Braun - 2023 - Science and Engineering Ethics 29 (3):1-21.
    Critics currently argue that applied ethics approaches to artificial intelligence (AI) are too principles-oriented and entail a theory–practice gap. Several applied ethical approaches try to prevent such a gap by conceptually translating ethical theory into practice. In this article, we explore how the currently most prominent approaches of AI ethics translate ethics into practice. Therefore, we examine three approaches to applied AI ethics: the embedded ethics approach, the ethically aligned approach, and the Value Sensitive Design (VSD) approach. We analyze (...)
    5 citations
  48. AI Through Ethical Lenses: A Discourse Analysis of Guidelines for AI in Healthcare. Laura Arbelaez Ossa, Stephen R. Milford, Michael Rost, Anja K. Leist, David M. Shaw & Bernice S. Elger - 2024 - Science and Engineering Ethics 30 (3):1-21.
    While the technologies that enable Artificial Intelligence (AI) continue to advance rapidly, there are increasing promises regarding AI’s beneficial outputs and concerns about the challenges of human–computer interaction in healthcare. To address these concerns, institutions have increasingly resorted to publishing AI guidelines for healthcare, aiming to align AI with ethical practices. However, guidelines as a form of written language can be analyzed to recognize the reciprocal links between its textual communication and underlying societal ideas. From this perspective, we conducted a (...)
  49. Ethical and legal challenges of AI in marketing: an exploration of solutions. Dinesh Kumar & Nidhi Suthar - 2024 - Journal of Information, Communication and Ethics in Society 22 (1):124-144.
    Artificial intelligence (AI) has sparked interest in various areas, including marketing. However, this exhilaration is being tempered by growing concerns about the moral and legal implications of using AI in marketing. Although previous research has revealed various ethical and legal issues, such as algorithmic discrimination and data privacy, there are no definitive answers. This paper aims to fill this gap by investigating AI’s ethical and legal concerns in marketing and suggesting feasible solutions. The paper synthesises information from academic articles, industry reports, (...)
    1 citation
  50. Moving beyond Technical Issues to Stakeholder Involvement: Key Areas for Consideration in the Development of Human-Centred and Trusted AI in Healthcare. Jane Kaye, Nisha Shah, Atsushi Kogetsu, Sarah Coy, Amelia Katirai, Machie Kuroda, Yan Li, Kazuto Kato & Beverley Anne Yamamoto - 2024 - Asian Bioethics Review 16 (3):501-511.
    Discussion around the increasing use of AI in healthcare tends to focus on the technical aspects of the technology rather than the socio-technical issues associated with implementation. In this paper, we argue for the development of a sustained societal dialogue between stakeholders around the use of AI in healthcare. We contend that a more human-centred approach to AI implementation in healthcare is needed which is inclusive of the views of a range of stakeholders. We identify four key areas to support (...)
1–50 of 982