Results for 'AI bias'

985 found
  1. A Bias Network Approach (BNA) to Encourage Ethical Reflection Among AI Developers.Gabriela Arriagada-Bruneau, Claudia López & Alexandra Davidoff - 2025 - Science and Engineering Ethics 31 (1):1-29.
    We introduce the Bias Network Approach (BNA) as a sociotechnical method for AI developers to identify, map, and relate biases across the AI development process. This approach addresses the limitations of what we call the "isolationist approach to AI bias," a trend in AI literature where biases are seen as separate occurrences linked to specific stages in an AI pipeline. Dealing with these multiple biases can trigger a sense of excessive overload in managing each potential bias individually (...)
  2. Toleration and Justice in the Laozi: Engaging with Tao Jiang's Origins of Moral-Political Philosophy in Early China.Ai Yuan - 2023 - Philosophy East and West 73 (2):466-475.
    In lieu of an abstract, here is a brief excerpt of the content: This review article engages with Tao Jiang's ground-breaking monograph on the Origins of Moral-Political Philosophy in Early China, with particular focus on the articulation of toleration and justice in the Laozi (otherwise called the Daodejing). Jiang discusses a naturalistic turn and the re-alignment of values in the Laozi, resulting in a naturalization (...)
  3. Out of dataset, out of algorithm, out of mind: a critical evaluation of AI bias against disabled people.Rohan Manzoor, Wajahat Hussain & Muhammad Latif Anjum - forthcoming - AI and Society:1-11.
    Generative AI models are shaping our future. In this work, we discover and expose the bias against physically challenged people in generative models. Generative models (Stable Diffusion XL and DALL·E 3) are unable to generate content related to the physically challenged, e.g., inclusive washroom, even with very detailed prompts. Our analysis reveals that this disability bias emanates from biased AI datasets. We achieve this using a novel strategy to automatically discover bias against underrepresented groups like the physically (...)
  4. Tackling Racial Bias in AI Systems: Applying the Bioethical Principle of Justice and Insights from Joy Buolamwini’s “Coded Bias” and the “Algorithmic Justice League”.Etaoghene Paul Polo & Donatus Osatofoh Ailodion - 2025 - Bangladesh Journal of Bioethics 16 (1):8-14.
    This paper explores the issue of racial bias in artificial intelligence (AI) through the lens of the bioethical principle of justice, with a focus on Joy Buolamwini’s “Coded Bias” and the work of the “Algorithmic Justice League.” AI technologies, particularly facial recognition systems, have been shown to disproportionately misidentify individuals from marginalised racial groups, raising profound ethical concerns about fairness and equity. The bioethical principle of justice stresses the importance of equal treatment and the protection of vulnerable populations. (...)
  5. How we can create the global agreement on generative AI bias: lessons from climate justice.Yong Jin Park - forthcoming - AI and Society:1-3.
  6. Conservative AI and social inequality: conceptualizing alternatives to bias through social theory.Mike Zajko - 2021 - AI and Society 36 (3):1047-1056.
    In response to calls for greater interdisciplinary involvement from the social sciences and humanities in the development, governance, and study of artificial intelligence systems, this paper presents one sociologist’s view on the problem of algorithmic bias and the reproduction of societal bias. Discussions of bias in AI cover much of the same conceptual terrain that sociologists studying inequality have long understood using more specific terms and theories. Concerns over reproducing societal bias should be informed by an (...)
    8 citations
  7. Cultural Bias in Explainable AI Research.Uwe Peters & Mary Carman - forthcoming - Journal of Artificial Intelligence Research.
    For synergistic interactions between humans and artificial intelligence (AI) systems, AI outputs often need to be explainable to people. Explainable AI (XAI) systems are commonly tested in human user studies. However, whether XAI researchers consider potential cultural differences in human explanatory needs remains unexplored. We highlight psychological research that found significant differences in human explanations between many people from Western, commonly individualist countries and people from non-Western, often collectivist countries. We argue that XAI research currently overlooks these variations and that (...)
    3 citations
  8. The Struggle for AI’s Recognition: Understanding the Normative Implications of Gender Bias in AI with Honneth’s Theory of Recognition.Rosalie Waelen & Michał Wieczorek - 2022 - Philosophy and Technology 35 (2).
    AI systems have often been found to contain gender biases. As a result of these gender biases, AI routinely fails to adequately recognize the needs, rights, and accomplishments of women. In this article, we use Axel Honneth’s theory of recognition to argue that AI’s gender biases are not only an ethical problem because they can lead to discrimination, but also because they resemble forms of misrecognition that can hurt women’s self-development and self-worth. Furthermore, we argue that Honneth’s theory of recognition (...)
    4 citations
  9. Ameliorating Algorithmic Bias, or Why Explainable AI Needs Feminist Philosophy.Linus Ta-Lun Huang, Hsiang-Yun Chen, Ying-Tung Lin, Tsung-Ren Huang & Tzu-Wei Hung - 2022 - Feminist Philosophy Quarterly 8 (3).
    Artificial intelligence (AI) systems are increasingly adopted to make decisions in domains such as business, education, health care, and criminal justice. However, such algorithmic decision systems can have prevalent biases against marginalized social groups and undermine social justice. Explainable artificial intelligence (XAI) is a recent development aiming to make an AI system’s decision processes less opaque and to expose its problematic biases. This paper argues against technical XAI, according to which the detection and interpretation of algorithmic bias can be (...)
    4 citations
  10. Gender bias perpetuation and mitigation in AI technologies: challenges and opportunities.Sinead O’Connor & Helen Liu - forthcoming - AI and Society:1-13.
    Across the world, artificial intelligence (AI) technologies are being more widely employed in public sector decision-making and processes as a supposedly neutral and efficient method for optimizing delivery of services. However, the deployment of these technologies has also prompted investigation into the potentially unanticipated consequences of their introduction, to both positive and negative ends. This paper focuses specifically on the relationship between gender bias and AI, exploring claims of the neutrality of such technologies and how its (...)
    3 citations
  11. Bias in algorithms of AI systems developed for COVID-19: A scoping review.Janet Delgado, Alicia de Manuel, Iris Parra, Cristian Moyano, Jon Rueda, Ariel Guersenzvaig, Txetxu Ausin, Maite Cruz, David Casacuberta & Angel Puyol - 2022 - Journal of Bioethical Inquiry 19 (3):407-419.
    To analyze which ethically relevant biases have been identified by academic literature in artificial intelligence algorithms developed either for patient risk prediction and triage, or for contact tracing to deal with the COVID-19 pandemic. Additionally, to specifically investigate whether the role of social determinants of health have been considered in these AI developments or not. We conducted a scoping review of the literature, which covered publications from March 2020 to April 2021. ​Studies mentioning biases on AI algorithms developed for contact (...)
    2 citations
  12. Bias and Epistemic Injustice in Conversational AI.Sebastian Laacke - 2023 - American Journal of Bioethics 23 (5):46-48.
    According to Russell and Norvig’s (2009) classification, Artificial Intelligence (AI) is the field that aims at building systems which either think rationally, act rationally, think like humans, or...
    5 citations
  13. Beyond bias and discrimination: redefining the AI ethics principle of fairness in healthcare machine-learning algorithms.Benedetta Giovanola & Simona Tiribelli - 2023 - AI and Society 38 (2):549-563.
    The increasing implementation of and reliance on machine-learning (ML) algorithms to perform tasks, deliver services and make decisions in health and healthcare have made the need for fairness in ML, and more specifically in healthcare ML algorithms (HMLA), a very important and urgent task. However, while the debate on fairness in the ethics of artificial intelligence (AI) and in HMLA has grown significantly over the last decade, the very concept of fairness as an ethical value has not yet been sufficiently (...)
    5 citations
  14. An Ethics of Deconstruction responding to AI generated Bias.Himanshu Jaysawal & Tanya Yadav - 2024 - Tattva - Journal of Philosophy 16 (2):65-86.
    The paper aims to respond to the ethical concern of biases generated by Artificial Intelligence systems. Even though biases enter an AI network via different channels, its presence in the algorithm can pose serious difficulties. AI systems have an algorithmic way of working where language is ‘formal’ and meaning is ‘fixed’. We employ deconstructive strategies of Jacques Derrida to understand the nature of this problem of AI bias through examination of algorithmic/programming language. Derridean philosophy looks at metaphysics as heavily (...)
  15. A Reflexive Approach to the Bias of AI and the Paradox of Enlightenment. 정성훈 - 2021 - Journal of the Society of Philosophical Studies 132:199-227.
    This article proposes one approach to the problem of AI bias, which has drawn wide attention owing to worldwide reports of cases in which algorithms introduced to improve efficiency and fairness instead produced discrimination and inequality, and in particular owing to the discriminatory and hateful remarks of the chatbot 'Iruda', which recently became controversial in Korea. Unlike a simple mechanical failure, the causes of bias in AI, often called a black box, are so intricately entangled that they are difficult to uncover, and technical approaches to reducing bias can themselves produce other kinds of bias. Moreover, raising the ethical awareness of the humans who can intervene in bias (...)
  16. Gender bias in visual generative artificial intelligence systems and the socialization of AI.Larry G. Locke & Grace Hodgdon - forthcoming - AI and Society:1-8.
    Substantial research over the last ten years has indicated that many generative artificial intelligence systems (“GAI”) have the potential to produce biased results, particularly with respect to gender. This potential for bias has grown progressively more important in recent years as GAI has become increasingly integrated in multiple critical sectors, such as healthcare, consumer lending, and employment. While much of the study of gender bias in popular GAI systems is focused on text-based GAI such as OpenAI’s ChatGPT and (...)
  17. Engineering Equity: How AI Can Help Reduce the Harm of Implicit Bias.Ying-Tung Lin, Tzu-Wei Hung & Linus Ta-Lun Huang - 2020 - Philosophy and Technology 34 (S1):65-90.
    This paper focuses on the potential of “equitech”—AI technology that improves equity. Recently, interventions have been developed to reduce the harm of implicit bias, the automatic form of stereotype or prejudice that contributes to injustice. However, these interventions—some of which are assisted by AI-related technology—have significant limitations, including unintended negative consequences and general inefficacy. To overcome these limitations, we propose a two-dimensional framework to assess current AI-assisted interventions and explore promising new ones. We begin by using the case of (...)
    10 citations
  18. Disability, fairness, and algorithmic bias in AI recruitment.Nicholas Tilmes - 2022 - Ethics and Information Technology 24 (2).
    While rapid advances in artificial intelligence hiring tools promise to transform the workplace, these algorithms risk exacerbating existing biases against marginalized groups. In light of these ethical issues, AI vendors have sought to translate normative concepts such as fairness into measurable, mathematical criteria that can be optimized for. However, questions of disability and access often are omitted from these ongoing discussions about algorithmic bias. In this paper, I argue that the multiplicity of different kinds and intensities of people’s disabilities (...)
    6 citations
  19. The preliminary consideration for Discrimination by AI and the responsibility problem - On Algorithm Bias learning and Human agent. 허유선 - 2018 - Korean Feminist Philosophy 29:165-209.
    This article is a preliminary study preceding a full philosophical investigation of discrimination by AI and the question of responsibility for it. Its main purpose is to pose discrimination by AI as a pressing 'problem' requiring philosophers' attention and, to that end, to clarify the nature and causes of the problem of 'discrimination by AI'. AI can repeat existing discrimination as it stands, reinforcing and perpetuating present discrimination, and this is not a matter of the distant future. The problem is occurring now and demands a communal response. From a philosopher's standpoint, however, it is not easy to address the related discussion of responsibility, largely because of the complex technical issues of AI and (...)
  20. Quantifying inductive bias: AI learning algorithms and Valiant's learning framework.David Haussler - 1988 - Artificial Intelligence 36 (2):177-221.
  21. Exploring the Question of Bias in AI through a Gender Performative Approach.Gabriele Nino & Francesca Alessandra Lisi - 2024 - Sexuality and Gender Studies Journal 2 (2):14-31.
    The objective of this paper is to examine how artificial intelligence systems (AI) can reproduce phenomena of social discrimination and to develop an ethical strategy for preventing such occurrences. A substantial body of scholarship has demonstrated how AI has the potential to erode the rights of women and LGBT+ individuals, as it is capable of amplifying forms of discrimination that are already pervasive in society. This paper examines the principal approaches that have been put forth to contrast the emergence of (...)
  22. Why AI image generators cannot afford to be blind to racial bias.Mustafa Arif & Yoshiyasu Takefuji - forthcoming - AI and Society:1-2.
  23. Apropos of "Speciesist bias in AI: how AI applications perpetuate discrimination and unfair outcomes against animals".Ognjen Arandjelović - 2023 - AI and Ethics.
    The present comment concerns a recent AI & Ethics article which purports to report evidence of speciesist bias in various popular computer vision (CV) and natural language processing (NLP) machine learning models described in the literature. I examine the authors' analysis and show it, ironically, to be prejudicial, often being founded on poorly conceived assumptions and suffering from fallacious and insufficiently rigorous reasoning, its superficial appeal in large part relying on the sequacity of the article's target readership.
  24. Policy advice and best practices on bias and fairness in AI.Jose M. Alvarez, Alejandra Bringas Colmenarejo, Alaa Elobaid, Simone Fabbrizzi, Miriam Fahimi, Antonio Ferrara, Siamak Ghodsi, Carlos Mougan, Ioanna Papageorgiou, Paula Reyero, Mayra Russo, Kristen M. Scott, Laura State, Xuan Zhao & Salvatore Ruggieri - 2024 - Ethics and Information Technology 26 (2):1-26.
    The literature addressing bias and fairness in AI models (fair-AI) is growing at a fast pace, making it difficult for novel researchers and practitioners to have a bird’s-eye view picture of the field. In particular, many policy initiatives, standards, and best practices in fair-AI have been proposed for setting principles, procedures, and knowledge bases to guide and operationalize the management of bias and fairness. The first objective of this paper is to concisely survey the state-of-the-art of fair-AI methods (...)
    1 citation
  25. Black Boxes and Bias in AI Challenge Autonomy.Craig M. Klugman - 2021 - American Journal of Bioethics 21 (7):33-35.
    In “Artificial Intelligence, Social Media and Depression: A New Concept of Health-Related Digital Autonomy,” Laacke and colleagues posit a revised model of autonomy when using digital algori...
    8 citations
  26. Correction: Beyond bias and discrimination: redefining the AI ethics principle of fairness in healthcare machine-learning algorithms.Benedetta Giovanola & Simona Tiribelli - 2024 - AI and Society 39 (5):2637-2637.
  27. Algorithmic bias in anthropomorphic artificial intelligence: Critical perspectives through the practice of women media artists and designers.Caterina Antonopoulou - 2023 - Technoetic Arts 21 (2):157-174.
    Current research in artificial intelligence (AI) sheds light on algorithmic bias embedded in AI systems. The underrepresentation of women in the AI design sector of the tech industry, as well as in training datasets, results in technological products that encode gender bias, reinforce stereotypes and reproduce normative notions of gender and femininity. Biased behaviour is notably reflected in anthropomorphic AI systems, such as personal intelligent assistants (PIAs) and chatbots, that are usually feminized through various design parameters, such as (...)
    1 citation
  28. Predicting crime or perpetuating bias? The AI dilemma.Jeena Joseph - forthcoming - AI and Society:1-3.
  29. Feminist AI: Can We Expect Our AI Systems to Become Feminist?Galit Wellner & Tiran Rothman - 2020 - Philosophy and Technology 33 (2):191-205.
    The rise of AI-based systems has been accompanied by the belief that these systems are impartial and do not suffer from the biases that humans and older technologies express. It becomes evident, however, that gender and racial biases exist in some AI algorithms. The question is where the bias is rooted—in the training dataset or in the algorithm? Is it a linguistic issue or a broader sociological current? Works in feminist philosophy of technology and behavioral economics reveal the gender (...)
    13 citations
  30. Navigating AI-Enabled Modalities of Representation and Materialization in Architecture: Visual Tropes, Verbal Biases, and Geo-Specificity.Asma Mehan & Sina Mostafavi - 2023 - Plan Journal 8 (2):1-16.
    This research delves into the potential of implementing artificial intelligence in architecture. It specifically provides a critical assessment of AI-enabled workflows, encompassing creative ideation, representation, materiality, and critical thinking, facilitated by prompt-based generative processes. In this context, the paper provides an examination of the concept of hybrid human–machine intelligence. In an era characterized by pervasive data bias and engineered injustices, the concept of hybrid intelligence emerges as a critical tool, enabling the transcendence of preconceived stereotypes, clichés, and linguistic prejudices. (...)
    2 citations
  31. Are AI systems biased against the poor? A machine learning analysis using Word2Vec and GloVe embeddings.Georgina Curto, Mario Fernando Jojoa Acosta, Flavio Comim & Begoña Garcia-Zapirain - forthcoming - AI and Society:1-16.
    Among the myriad of technical approaches and abstract guidelines proposed to the topic of AI bias, there has been an urgent call to translate the principle of fairness into the operational AI reality with the involvement of social sciences specialists to analyse the context of specific types of bias, since there is not a generalizable solution. This article offers an interdisciplinary contribution to the topic of AI and societal bias, in particular against the poor, providing a conceptual (...)
    3 citations
  32. Automation Bias and Procedural Fairness: A Short Guide for the UK Civil Service.John Zerilli, Iñaki Goñi & Matilde Masetti Placci - 2024 - Braid Reports.
    The use of advanced AI and data-driven automation in the public sector poses several organisational, practical, and ethical challenges. One that is easy to underestimate is automation bias, which, in turn, has underappreciated legal consequences. Automation bias is an attitude in which the operator of an autonomous system will defer to its outputs to the point where the operator overlooks or ignores evidence that the system is failing. The legal problem arises when statutory office-holders (or their employees) either (...)
  33. Health AI Poses Distinct Harms and Potential Benefits for Disabled People.Charles Binkley, Joel Michael Reynolds & Andrew Schuman - 2025 - Nature Medicine 1.
    This piece in Nature Medicine notes the risks that incorporation of AI systems into health care poses to disabled patients and proposes ways to avoid them and instead create benefit.
  34. Disambiguating Algorithmic Bias: From Neutrality to Justice.Elizabeth Edenberg & Alexandra Wood - 2023 - In Francesca Rossi, Sanmay Das, Jenny Davis, Kay Firth-Butterfield & Alex John, AIES '23: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society. Association for Computing Machinery. pp. 691-704.
    As algorithms have become ubiquitous in consequential domains, societal concerns about the potential for discriminatory outcomes have prompted urgent calls to address algorithmic bias. In response, a rich literature across computer science, law, and ethics is rapidly proliferating to advance approaches to designing fair algorithms. Yet computer scientists, legal scholars, and ethicists are often not speaking the same language when using the term ‘bias.’ Debates concerning whether society can or should tackle the problem of algorithmic bias are (...)
    1 citation
  35. Why AI Art Is Not Art – A Heideggerian Critique.Karl Kraatz & Shi-Ting Xie - 2023 - Synthesis Philosophica 38 (2):235-253.
    AI’s new ability to create artworks is seen as a major challenge to today’s understanding of art. There is a strong tension between people who predict that AI will replace artists and critics who claim that AI art will never be art. Furthermore, recent studies have documented a negative bias towards AI art. This paper provides a philosophical explanation for this negative bias, based on our shared understanding of the ontological differences between objects. We argue that our perception (...)
  36. AI, Explainability and Public Reason: The Argument from the Limitations of the Human Mind.Jocelyn Maclure - 2021 - Minds and Machines 31 (3):421-438.
    Machine learning-based AI algorithms lack transparency. In this article, I offer an interpretation of AI’s explainability problem and highlight its ethical saliency. I try to make the case for the legal enforcement of a strong explainability requirement: human organizations which decide to automate decision-making should be legally obliged to demonstrate the capacity to explain and justify the algorithmic decisions that have an impact on the wellbeing, rights, and opportunities of those affected by the decisions. This legal duty can be derived (...)
    15 citations
  37. AI employment decision-making: integrating the equal opportunity merit principle and explainable AI.Gary K. Y. Chan - forthcoming - AI and Society.
    Artificial intelligence tools used in employment decision-making cut across the multiple stages of job advertisements, shortlisting, interviews and hiring, and actual and potential bias can arise in each of these stages. One major challenge is to mitigate AI bias and promote fairness in opaque AI systems. This paper argues that the equal opportunity merit principle is an ethical approach for fair AI employment decision-making. Further, explainable AI can mitigate the opacity problem by placing greater emphasis on enhancing the (...)
  38. AI support for ethical decision-making around resuscitation: proceed with care.Nikola Biller-Andorno, Andrea Ferrario, Susanne Joebges, Tanja Krones, Federico Massini, Phyllis Barth, Georgios Arampatzis & Michael Krauthammer - 2022 - Journal of Medical Ethics 48 (3):175-183.
    Artificial intelligence (AI) systems are increasingly being used in healthcare, thanks to the high level of performance that these systems have proven to deliver. So far, clinical applications have focused on diagnosis and on prediction of outcomes. It is less clear in what way AI can or should support complex clinical decisions that crucially depend on patient preferences. In this paper, we focus on the ethical questions arising from the design, development and deployment of AI systems to support decision-making around (...)
    8 citations
  39. Can AI Achieve Common Good and Well-being? Implementing the NSTC's R&D Guidelines with a Human-Centered Ethical Approach.Jr-Jiun Lian - 2024 - 2024 Annual Conference on Science, Technology, and Society (Sts) Academic Paper, National Taitung University. Translated by Jr-Jiun Lian.
    This paper delves into the significance and challenges of Artificial Intelligence (AI) ethics and justice in terms of Common Good and Well-being, fairness and non-discrimination, rational public deliberation, and autonomy and control. Initially, the paper establishes the groundwork for subsequent discussions using the Academia Sinica LLM incident and the AI Technology R&D Guidelines of the National Science and Technology Council(NSTC) as a starting point. In terms of justice and ethics in AI, this research investigates whether AI can fulfill human common (...)
  40. What is AI safety? What do we want it to be?Jacqueline Harding & Cameron Domenico Kirk-Giannini - manuscript
    The field of AI safety seeks to prevent or reduce the harms caused by AI systems. A simple and appealing account of what is distinctive of AI safety as a field holds that this feature is constitutive: a research project falls within the purview of AI safety just in case it aims to prevent or reduce the harms caused by AI systems. Call this appealingly simple account The Safety Conception of AI safety. Despite its simplicity and appeal, we argue that (...)
  41. Generative AI entails a credit–blame asymmetry.Sebastian Porsdam Mann, Brian D. Earp, Sven Nyholm, John Danaher, Nikolaj Møller, Hilary Bowman-Smart, Joshua Hatherley, Julian Koplin, Monika Plozza, Daniel Rodger, Peter V. Treit, Gregory Renard, John McMillan & Julian Savulescu - 2023 - Nature Machine Intelligence 5 (5):472-475.
    Generative AI programs can produce high-quality written and visual content that may be used for good or ill. We argue that a credit–blame asymmetry arises for assigning responsibility for these outputs and discuss urgent ethical and policy implications focused on large-scale language models.
    4 citations
  42. The Whiteness of AI.Stephen Cave & Kanta Dihal - 2020 - Philosophy and Technology 33 (4):685-703.
    This paper focuses on the fact that AI is predominantly portrayed as white—in colour, ethnicity, or both. We first illustrate the prevalent Whiteness of real and imagined intelligent machines in four categories: humanoid robots, chatbots and virtual assistants, stock images of AI, and portrayals of AI in film and television. We then offer three interpretations of the Whiteness of AI, drawing on critical race theory, particularly the idea of the White racial frame. First, we examine the extent to which this (...)
    36 citations
  43. Who's really afraid of AI?: Anthropocentric bias and postbiological evolution.Milan M. Ćirković - 2022 - Belgrade Philosophical Annual 35:17-29.
    The advent of artificial intelligence (AI) systems has provoked a lot of discussions in both epistemological, bioethical and risk-analytic terms, much of it rather paranoid in nature. Unless one takes an extreme anthropocentric and chronocentric stance, this process can be safely regarded as part and parcel of the sciences of the origin. In this contribution, I would like to suggest that at least four different classes of arguments could be brought forth against the proposition that AI - either human-level or (...)
  44. Medical AI and human dignity: Contrasting perceptions of human and artificially intelligent (AI) decision making in diagnostic and medical resource allocation contexts.Paul Formosa, Wendy Rogers, Yannick Griep, Sarah Bankins & Deborah Richards - 2022 - Computers in Human Behaviour 133.
    Forms of Artificial Intelligence (AI) are already being deployed into clinical settings and research into its future healthcare uses is accelerating. Despite this trajectory, more research is needed regarding the impacts on patients of increasing AI decision making. In particular, the impersonal nature of AI means that its deployment in highly sensitive contexts-of-use, such as in healthcare, raises issues associated with patients’ perceptions of (un) dignified treatment. We explore this issue through an experimental vignette study comparing individuals’ perceptions of being (...)
  45. Ethical funding for trustworthy AI: proposals to address the responsibilities of funders to ensure that projects adhere to trustworthy AI practice.Marie Oldfield - 2021 - AI and Ethics 1 (1):1.
    AI systems that demonstrate significant bias or lower-than-claimed accuracy, resulting in individual and societal harms, continue to be reported. Such reports raise the question of why such systems continue to be funded, developed, and deployed despite the many published ethical AI principles. This paper focuses on the funding processes for AI research grants, which we have identified as a gap in the current range of ethical AI solutions such as AI procurement guidelines, AI impact assessments (...)
    1 citation
  46. Algorithms are not neutral: Bias in collaborative filtering.Catherine Stinson - 2021 - AI and Ethics 2 (4):763-770.
    When Artificial Intelligence (AI) is applied in decision-making that affects people’s lives, it is now well established that the outcomes can be biased or discriminatory. The question of whether algorithms themselves can be among the sources of bias has been the subject of recent debate among Artificial Intelligence researchers and scholars who study the social impact of technology. There has been a tendency to focus on examples where the data set used to train the AI is biased, and denial (...)
     
    2 citations
  47. Addressing bias in artificial intelligence for public health surveillance.Lidia Flores, Seungjun Kim & Sean D. Young - 2024 - Journal of Medical Ethics 50 (3):190-194.
    Components of artificial intelligence (AI) for analysing social big data, such as natural language processing (NLP) algorithms, have improved the timeliness and robustness of health data. NLP techniques have been implemented to analyse large volumes of text from social media platforms to gain insights on disease symptoms, understand barriers to care and predict disease outbreaks. However, AI-based decisions may contain biases that could misrepresent populations, skew results or lead to errors. Bias, within the scope of this paper, is described (...)
  48. AI-Assisted Decision-making in Healthcare: The Application of an Ethics Framework for Big Data in Health and Research.Tamra Lysaght, Hannah Yeefen Lim, Vicki Xafis & Kee Yuan Ngiam - 2019 - Asian Bioethics Review 11 (3):299-314.
    Artificial intelligence is set to transform healthcare. Key ethical issues to emerge with this transformation encompass the accountability and transparency of the decisions made by AI-based systems, the potential for group harms arising from algorithmic bias, and the professional roles and integrity of clinicians. These concerns must be balanced against the imperative of generating public benefit from more efficient healthcare systems built on the vastly greater and more accurate computational power of AI. In weighing up these issues, this paper applies the (...)
    14 citations
  49. AI and Phronesis.Dan Feldman & Nir Eisikovits - 2022 - Moral Philosophy and Politics 9 (2):181-199.
    We argue that the growing prevalence of statistical machine learning in everyday decision making – from creditworthiness to police force allocation – effectively replaces many of our humdrum practical judgments and that this will eventually undermine our capacity for making such judgments. We lean on Aristotle’s famous account of how phronesis and moral virtues develop to make our case. If Aristotle is right that the habitual exercise of practical judgment allows us to incrementally hone virtues, and if AI saves us (...)
    5 citations
  50. AI ageism: a critical roadmap for studying age discrimination and exclusion in digitalized societies.Justyna Stypinska - 2023 - AI and Society 38 (2):665-677.
    In the last few years, we have witnessed a surge in scholarly interest and scientific evidence of how algorithms can produce discriminatory outcomes, especially with regard to gender and race. However, the analysis of fairness and bias in AI, important for the debate of AI for social good, has paid insufficient attention to the category of age and older people. Ageing populations have been largely neglected during the turn to digitality and AI. In this article, the concept of AI (...)
    4 citations
Results 1–50 of 985