Contents
194 found
1 — 50 / 194
  1. (1 other version) If the Difference Principle Won’t Make a Real Difference in Algorithmic Fairness, What Will? [REVIEW] Reuben Binns - manuscript
    In ‘Rawlsian algorithmic fairness and a missing aggregation property of the difference principle’, the authors argue that there is a false assumption in algorithmic fairness interventions inspired by John Rawls’ theory of justice. They argue that applying the difference principle at the level of a local algorithmic decision-making context (what they term a ‘constituent situation’) is neither necessary nor sufficient for the difference principle to be upheld at the aggregate level of society at large. I find these arguments compelling. They (...)
  2. From Model Performance to Claim: How a Change of Focus in Machine Learning Replicability Can Help Bridge the Responsibility Gap. Tianqi Kou - manuscript
    Two goals - improving the replicability and the accountability of Machine Learning research, respectively - have attracted much attention from the AI ethics and Machine Learning communities. Despite sharing measures for improving transparency, the two goals are discussed in different registers - replicability registers with scientific reasoning, whereas accountability registers with ethical reasoning. Given the existing challenge of the Responsibility Gap - holding Machine Learning scientists accountable for Machine Learning harms due to their being far from sites of application - this paper (...)
  3. (1 other version) Beneficent Intelligence: A Capability Approach to Modeling Benefit, Assistance, and Associated Moral Failures through AI Systems. Alex John London & Hoda Heidari - manuscript
    The prevailing discourse around AI ethics lacks the language and formalism necessary to capture the diverse ethical concerns that emerge when AI systems interact with individuals. Drawing on Sen and Nussbaum's capability approach, we present a framework formalizing a network of ethical concepts and entitlements necessary for AI systems to confer meaningful benefit or assistance to stakeholders. Such systems enhance stakeholders' ability to advance their life plans and well-being while upholding their fundamental rights. We characterize two necessary conditions for morally (...)
  4. Algorithmic neutrality. Milo Phillips-Brown - manuscript
    Algorithms wield increasing control over our lives—over the jobs we get, the loans we're granted, the information we see online. Algorithms can and often do wield their power in a biased way, and much work has been devoted to algorithmic bias. In contrast, algorithmic neutrality has been largely neglected. I investigate algorithmic neutrality, tackling three questions: What is algorithmic neutrality? Is it possible? And when we have it in mind, what can we learn about algorithmic bias?
  5. Trust in AI: Progress, Challenges, and Future Directions. Saleh Afroogh, Ali Akbari, Emmie Malone, Mohammadali Kargar & Hananeh Alambeigi - forthcoming - Nature Humanities and Social Sciences Communications.
    The increasing use of artificial intelligence (AI) systems in our daily life through various applications, services, and products explains the significance of trust/distrust in AI from a user perspective. AI-driven systems have significantly diffused into various fields of our lives, serving as beneficial tools used by human agents. These systems are also evolving to act as co-assistants or semi-agents in specific domains, potentially influencing human thought, decision-making, and agency. Trust/distrust in AI plays the role of a regulator and could significantly (...)
  6. Practical, epistemic and normative implications of algorithmic bias in healthcare artificial intelligence: a qualitative study of multidisciplinary expert perspectives. Yves Saint James Aquino, Stacy M. Carter, Nehmat Houssami, Annette Braunack-Mayer, Khin Than Win, Chris Degeling, Lei Wang & Wendy A. Rogers - forthcoming - Journal of Medical Ethics.
    Background There is a growing concern about artificial intelligence (AI) applications in healthcare that can disadvantage already under-represented and marginalised groups (eg, based on gender or race). Objectives Our objectives are to canvas the range of strategies stakeholders endorse in attempting to mitigate algorithmic bias, and to consider the ethical question of responsibility for algorithmic bias. Methodology The study involves in-depth, semistructured interviews with healthcare workers, screening programme managers, consumer health representatives, regulators, data scientists and developers. Results Findings reveal considerable (...)
  7. AI Decision Making with Dignity? Contrasting Workers’ Justice Perceptions of Human and AI Decision Making in a Human Resource Management Context. Sarah Bankins, Paul Formosa, Yannick Griep & Deborah Richards - forthcoming - Information Systems Frontiers.
    Using artificial intelligence (AI) to make decisions in human resource management (HRM) raises questions of how fair employees perceive these decisions to be and whether they experience respectful treatment (i.e., interactional justice). In this experimental survey study with open-ended qualitative questions, we examine decision making in six HRM functions and manipulate the decision maker (AI or human) and decision valence (positive or negative) to determine their impact on individuals’ experiences of interactional justice, trust, dehumanization, and perceptions of decision-maker role appropriateness (...)
  8. MinMax fairness: from Rawlsian Theory of Justice to solution for algorithmic bias. Flavia Barsotti & Rüya Gökhan Koçer - forthcoming - AI and Society:1-14.
    This paper presents an intuitive explanation of why and how Rawlsian Theory of Justice (Rawls in A theory of justice, Harvard University Press, Harvard, 1971) provides the foundations for a solution to algorithmic bias. The contribution of the paper is to discuss and show why Rawlsian ideas in their original form (e.g. the veil of ignorance, original position, and allowing inequalities that serve the worst-off) are relevant to operationalizing fairness for algorithmic decision making. The paper also explains how this leads (...)
  9. Ethical assurance: a practical approach to the responsible design, development, and deployment of data-driven technologies. Christopher Burr & David Leslie - forthcoming - AI and Ethics.
    This article offers several contributions to the interdisciplinary project of responsible research and innovation in data science and AI. First, it provides a critical analysis of current efforts to establish practical mechanisms for algorithmic auditing and assessment to identify limitations and gaps with these approaches. Second, it provides a brief introduction to the methodology of argument-based assurance and explores how it is currently being applied in the development of safety cases for autonomous and intelligent systems. Third, it generalises this method (...)
  10. Investigating gender and racial biases in DALL-E Mini Images. Marc Cheong, Ehsan Abedin, Marinus Ferreira, Ritsaart Willem Reimann, Shalom Chalson, Pamela Robinson, Joanne Byrne, Leah Ruppanner, Mark Alfano & Colin Klein - forthcoming - ACM Journal on Responsible Computing.
    Generative artificial intelligence systems based on transformers, including both text-generators like GPT-4 and image generators like DALL-E 3, have recently entered the popular consciousness. These tools, while impressive, are liable to reproduce, exacerbate, and reinforce extant human social biases, such as gender and racial biases. In this paper, we systematically review the extent to which DALL-E Mini suffers from this problem. In line with the Model Card published alongside DALL-E Mini by its creators, we find that the images it produces (...)
  11. Algorithmic Decision-making, Statistical Evidence and the Rule of Law. Vincent Chiao - forthcoming - Episteme.
    The rapidly increasing role of automation throughout the economy, culture and our personal lives has generated a large literature on the risks of algorithmic decision-making, particularly in high-stakes legal settings. Algorithmic tools are charged with bias, shrouded in secrecy, and frequently difficult to interpret. However, these criticisms have tended to focus on particular implementations, specific predictive techniques, and the idiosyncrasies of the American legal-regulatory regime. They do not address the more fundamental unease about the prospect that we might one day (...)
  12. Algorithmic Fairness Criteria as Evidence. Will Fleisher - forthcoming - Ergo: An Open Access Journal of Philosophy.
    Statistical fairness criteria are widely used for diagnosing and ameliorating algorithmic bias. However, these fairness criteria are controversial as their use raises several difficult questions. I argue that the major problems for statistical algorithmic fairness criteria stem from an incorrect understanding of their nature. These criteria are primarily used for two purposes: first, evaluating AI systems for bias, and second, constraining machine learning optimization problems in order to ameliorate such bias. The first purpose typically involves treating each criterion as a (...)
  13. Equal accuracy for Andrew and Abubakar—detecting and mitigating bias in name-ethnicity classification algorithms. Lena Hafner, Theodor Peter Peifer & Franziska Sofia Hafner - forthcoming - AI and Society:1-25.
    Uncovering the world’s ethnic inequalities is hampered by a lack of ethnicity-annotated datasets. Name-ethnicity classifiers (NECs) can help, as they are able to infer people’s ethnicities from their names. However, since the latest generation of NECs rely on machine learning and artificial intelligence (AI), they may suffer from the same racist and sexist biases found in many AIs. Therefore, this paper offers an algorithmic fairness audit of three NECs. It finds that the UK-Census-trained EthnicityEstimator displays large accuracy biases with regards (...)
  14. Are clinicians ethically obligated to disclose their use of medical machine learning systems to patients? Joshua Hatherley - forthcoming - Journal of Medical Ethics.
    It is commonly accepted that clinicians are ethically obligated to disclose their use of medical machine learning systems to patients, and that failure to do so would amount to a moral fault for which clinicians ought to be held accountable. Call this ‘the disclosure thesis.’ Four main arguments have been, or could be, given to support the disclosure thesis in the ethics literature: the risk-based argument, the rights-based argument, the materiality argument and the autonomy argument. In this article, I argue (...)
  15. A Framework for Assurance Audits of Algorithmic Systems. Benjamin Lange, Khoa Lam, Borhane Hamelin, Davidovic Jovana, Shea Brown & Ali Hasan - forthcoming - Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency.
    An increasing number of regulations propose the notion of ‘AI audits’ as an enforcement mechanism for achieving transparency and accountability for artificial intelligence (AI) systems. Despite some converging norms around various forms of AI auditing, auditing for the purpose of compliance and assurance currently has little to no agreed-upon practices, procedures, taxonomies, and standards. We propose the ‘criterion audit’ as an operationalizable compliance and assurance external audit framework. We model elements of this approach after financial auditing practices, and argue (...)
  16. Applicants’ Fairness Perceptions of Algorithm-Driven Hiring Procedures. Maude Lavanchy, Patrick Reichert, Jayanth Narayanan & Krishna Savani - forthcoming - Journal of Business Ethics.
    Despite the rapid adoption of technology in human resource departments, there is little empirical work that examines the potential challenges of algorithmic decision-making in the recruitment process. In this paper, we take the perspective of job applicants and examine how they perceive the use of algorithms in selection and recruitment. Across four studies on Amazon Mechanical Turk, we show that people in the role of a job applicant perceive algorithm-driven recruitment processes as less fair compared to human-only or algorithm-assisted (...)
  17. Algorithmic Profiling as a Source of Hermeneutical Injustice. Silvia Milano & Carina Prunkl - forthcoming - Philosophical Studies:1-19.
    It is well-established that algorithms can be instruments of injustice. It is less frequently discussed, however, how current modes of AI deployment often make the very discovery of injustice difficult, if not impossible. In this article, we focus on the effects of algorithmic profiling on epistemic agency. We show how algorithmic profiling can give rise to epistemic injustice through the depletion of epistemic resources that are needed to interpret and evaluate certain experiences. By doing so, we not only demonstrate how (...)
  18. Understanding user sensemaking in fairness and transparency in algorithms: algorithmic sensemaking in over-the-top platform. Donghee Shin, Joon Soo Lim, Norita Ahmad & Mohammed Ibahrine - forthcoming - AI and Society:1-14.
    A number of artificial intelligence systems have been proposed to assist users in identifying the issues of algorithmic fairness and transparency. These AI systems use diverse bias detection methods from various perspectives, including exploratory cues, interpretable tools, and revealing algorithms. This study explains the design of AI systems by probing how users make sense of fairness and transparency as they are hypothetical in nature, with no specific ways for evaluation. Focusing on individual perceptions of fairness and transparency, this study examines (...)
  19. The Ideals Program in Algorithmic Fairness. Rush T. Stewart - forthcoming - AI and Society:1-11.
    I consider statistical criteria of algorithmic fairness from the perspective of the ideals of fairness to which these criteria are committed. I distinguish and describe three theoretical roles such ideals might play. The usefulness of this program is illustrated by taking Base Rate Tracking and its ratio variant as a case study. I identify and compare the ideals of these two criteria, then consider them in each of the aforementioned three roles for ideals. This ideals program may present a way (...)
  20. An Impossibility Theorem for Base Rate Tracking and Equalized Odds. Rush Stewart, Benjamin Eva, Shanna Slank & Reuben Stern - forthcoming - Analysis.
    There is a theorem that shows that it is impossible for an algorithm to jointly satisfy the statistical fairness criteria of Calibration and Equalised Odds non-trivially. But what about the recently advocated alternative to Calibration, Base Rate Tracking? Here, we show that Base Rate Tracking is strictly weaker than Calibration, and then take up the question of whether it is possible to jointly satisfy Base Rate Tracking and Equalised Odds in non-trivial scenarios. We show that it is not, thereby establishing (...)
  21. Legitimate Power, Illegitimate Automation: The problem of ignoring legitimacy in automated decision systems. Jake Iain Stone & Brent Mittelstadt - forthcoming - The Association for Computing Machinery Conference on Fairness, Accountability, and Transparency 2024.
    Progress in machine learning and artificial intelligence has spurred the widespread adoption of automated decision systems (ADS). An extensive literature explores what conditions must be met for these systems' decisions to be fair. However, questions of legitimacy -- why those in control of ADS are entitled to make such decisions -- have received comparatively little attention. This paper shows that when such questions are raised theorists often incorrectly conflate legitimacy with either public acceptance or other substantive values such as fairness, (...)
  22. From Explanation to Recommendation: Ethical Standards for Algorithmic Recourse. Emily Sullivan & Philippe Verreault-Julien - forthcoming - Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (AIES’22).
    People are increasingly subject to algorithmic decisions, and it is generally agreed that end-users should be provided an explanation or rationale for these decisions. There are different purposes that explanations can have, such as increasing user trust in the system or allowing users to contest the decision. One specific purpose that is gaining more traction is algorithmic recourse. We first propose that recourse should be viewed as a recommendation problem, not an explanation problem. Then, we argue that the capability (...)
  23. Informational richness and its impact on algorithmic fairness. Marcello Di Bello & Ruobin Gong - 2025 - Philosophical Studies 182 (1):25-53.
    The literature on algorithmic fairness has examined exogenous sources of biases such as shortcomings in the data and structural injustices in society. It has also examined internal sources of bias as evidenced by a number of impossibility theorems showing that no algorithm can concurrently satisfy multiple criteria of fairness. This paper contributes to the literature stemming from the impossibility theorems by examining how informational richness affects the accuracy and fairness of predictive algorithms. With the aid of a computer simulation, we (...)
  24. The Epistemic Cost of Opacity: How the Use of Artificial Intelligence Undermines the Knowledge of Medical Doctors in High-Stakes Contexts. Eva Schmidt, Paul Martin Putora & Rianne Fijten - 2025 - Philosophy and Technology 38 (1):1-22.
    Artificial intelligence (AI) systems used in medicine are often very reliable and accurate, but at the price of their being increasingly opaque. This raises the question whether a system’s opacity undermines the ability of medical doctors to acquire knowledge on the basis of its outputs. We investigate this question by focusing on a case in which a patient’s risk of recurring breast cancer is predicted by an opaque AI system. We argue that, given the system’s opacity, as well as the (...)
  25. Policy advice and best practices on bias and fairness in AI. Jose M. Alvarez, Alejandra Bringas Colmenarejo, Alaa Elobaid, Simone Fabbrizzi, Miriam Fahimi, Antonio Ferrara, Siamak Ghodsi, Carlos Mougan, Ioanna Papageorgiou, Paula Reyero, Mayra Russo, Kristen M. Scott, Laura State, Xuan Zhao & Salvatore Ruggieri - 2024 - Ethics and Information Technology 26 (2):1-26.
    The literature addressing bias and fairness in AI models (fair-AI) is growing at a fast pace, making it difficult for novel researchers and practitioners to have a bird’s-eye view picture of the field. In particular, many policy initiatives, standards, and best practices in fair-AI have been proposed for setting principles, procedures, and knowledge bases to guide and operationalize the management of bias and fairness. The first objective of this paper is to concisely survey the state-of-the-art of fair-AI methods and resources, (...)
  26. Do AI systems Allow Online Advertisers to Control Others? Gabriel De Marco & T. Douglas - 2024 - In David Edmonds (ed.), AI Morality. Oxford: Oxford University Press USA.
  27. Artificial Intelligence in Higher Education in South Africa: Some Ethical Considerations. Tanya de Villiers-Botha - 2024 - Kagisano 15:165-188.
    There are calls from various sectors, including the popular press, industry, and academia, to incorporate artificial intelligence (AI)-based technologies in general, and large language models (LLMs) (such as ChatGPT and Gemini) in particular, into various spheres of the South African higher education sector. Nonetheless, the implementation of such technologies is not without ethical risks, notably those related to bias, unfairness, privacy violations, misinformation, lack of transparency, and threats to autonomy. This paper gives an overview of the more pertinent ethical concerns (...)
  28. Criteria for Assessing AI-Based Sentencing Algorithms: A Reply to Ryberg. Thomas Douglas - 2024 - Philosophy and Technology 37 (1):1-4.
  29. Generative AI and the Future of Democratic Citizenship. Paul Formosa, Bhanuraj Kashyap & Siavosh Sahebi - 2024 - Digital Government: Research and Practice 2691 (2024/05-ART).
    Generative AI technologies have the potential to be socially and politically transformative. In this paper, we focus on exploring the potential impacts that Generative AI could have on the functioning of our democracies and the nature of citizenship. We do so by drawing on accounts of deliberative democracy and the deliberative virtues associated with it, as well as the reciprocal impacts that social media and Generative AI will have on each other and the broader information landscape. Drawing on this background (...)
  30. Adaptive Interventions Reducing Social Identity Threat to Increase Equity in Higher Distance Education: A Use Case and Ethical Considerations on Algorithmic Fairness. Laura Froehlich & Sebastian Weydner-Volkmann - 2024 - Journal of Learning Analytics 11 (2):112-122.
    Educational disparities between traditional and non-traditional student groups in higher distance education can potentially be reduced by alleviating social identity threat and strengthening students’ sense of belonging in the academic context. We present a use case of how Learning Analytics and Machine Learning can be applied to develop and implement an algorithm to classify students as at-risk of experiencing social identity threat. These students would be presented with an intervention fostering a sense of belonging. We systematically analyze the intervention’s intended (...)
  31. The Many Meanings of Vulnerability in the AI Act and the One Missing. Federico Galli & Claudio Novelli - 2024 - Biolaw Journal 1.
    This paper reviews the different meanings of vulnerability in the AI Act (AIA). We show that the AIA follows a rather established tradition of looking at vulnerability as a trait or a state of certain individuals and groups. It also includes a promising account of vulnerability as a relation but does not clarify if and how AI changes this relation. We spot the missing piece of the AIA: the lack of recognition that vulnerability is an inherent feature of all human-AI (...)
  32. Making a Murderer: How Risk Assessment Tools May Produce Rather Than Predict Criminal Behavior. Donal Khosrowi & Philippe van Basshuysen - 2024 - American Philosophical Quarterly 61 (4):309-325.
    Algorithmic risk assessment tools, such as COMPAS, are increasingly used in criminal justice systems to predict the risk of defendants to reoffend in the future. This paper argues that these tools may not only predict recidivism, but may themselves causally induce recidivism through self-fulfilling predictions. We argue that such “performative” effects can yield severe harms both to individuals and to society at large, which raise epistemic-ethical responsibilities on the part of developers and users of risk assessment tools. To meet these (...)
  33. On the site of predictive justice. Seth Lazar & Jake Stone - 2024 - Noûs 58 (3):730-754.
    Optimism about our ability to enhance societal decision‐making by leaning on Machine Learning (ML) for cheap, accurate predictions has palled in recent years, as these ‘cheap’ predictions have come at significant social cost, contributing to systematic harms suffered by already disadvantaged populations. But what precisely goes wrong when ML goes wrong? We argue that, as well as more obvious concerns about the downstream effects of ML‐based decision‐making, there can be moral grounds for the criticism of these predictions themselves. We introduce (...)
  34. Fair equality of chances for prediction-based decisions. Michele Loi, Anders Herlitz & Hoda Heidari - 2024 - Economics and Philosophy 40 (3):557-580.
    This article presents a fairness principle for evaluating decision-making based on predictions: a decision rule is unfair when the individuals directly impacted by the decisions who are equal with respect to the features that justify inequalities in outcomes do not have the same statistical prospects of being benefited or harmed by them, irrespective of their socially salient morally arbitrary traits. The principle can be used to evaluate prediction-based decision-making from the point of view of a wide range of antecedently specified (...)
  35. Fairness Hacking: The Malicious Practice of Shrouding Unfairness in Algorithms. Kristof Meding & Thilo Hagendorff - 2024 - Philosophy and Technology 37 (1):1-22.
    Fairness in machine learning (ML) is an ever-growing field of research due to the manifold potential for harm from algorithmic discrimination. To prevent such harm, a large body of literature develops new approaches to quantify fairness. Here, we investigate how one can divert the quantification of fairness by describing a practice we call “fairness hacking” for the purpose of shrouding unfairness in algorithms. This impacts end-users who rely on learning algorithms, as well as the broader community interested in fair AI (...)
  36. Conformism, Ignorance & Injustice: AI as a Tool of Epistemic Oppression. Martin Miragoli - 2024 - Episteme: A Journal of Social Epistemology:1-19.
    From music recommendation to assessment of asylum applications, machine-learning algorithms play a fundamental role in our lives. Naturally, the rise of AI implementation strategies has brought to public attention the ethical risks involved. However, the dominant anti-discrimination discourse, too often preoccupied with identifying particular instances of harmful AIs, has yet to bring clearly into focus the more structural roots of AI-based injustice. This paper addresses the problem of AI-based injustice from a distinctively epistemic angle. More precisely, I argue that the (...)
  37. New Possibilities for Fair Algorithms. Michael Nielsen & Rush Stewart - 2024 - Philosophy and Technology 37 (4):1-17.
    We introduce a fairness criterion that we call Spanning. Spanning i) is implied by Calibration, ii) retains interesting properties of Calibration that some other ways of relaxing that criterion do not, and iii) unlike Calibration and other prominent ways of weakening it, is consistent with Equalized Odds outside of trivial cases.
  38. Spanning in and Spacing out? A Reply to Eva. Michael Nielsen & Rush Stewart - 2024 - Philosophy and Technology 37 (4):1-4.
    We reply to Eva's comment on our "New Possibilities for Fair Algorithms," comparing and contrasting our Spanning criterion with his suggested Spacing criterion.
  39. What’s Impossible about Algorithmic Fairness? Otto Sahlgren - 2024 - Philosophy and Technology 37 (4):1-23.
    The now well-known impossibility results of algorithmic fairness demonstrate that an error-prone predictive model cannot simultaneously satisfy two plausible conditions for group fairness apart from exceptional circumstances where groups exhibit equal base rates. The results sparked, and continue to shape, lively debates surrounding algorithmic fairness conditions and the very possibility of building fair predictive models. This article, first, highlights three underlying points of disagreement in these debates, which have led to diverging assessments of the feasibility of fairness in prediction-based decision-making. (...)
  40. A Genealogical Approach to Algorithmic Bias. Marta Ziosi, David Watson & Luciano Floridi - 2024 - Minds and Machines 34 (2):1-17.
    The Fairness, Accountability, and Transparency (FAccT) literature tends to focus on bias as a problem that requires ex post solutions (e.g. fairness metrics), rather than addressing the underlying social and technical conditions that (re)produce it. In this article, we propose a complementary strategy that uses genealogy as a constructive, epistemic critique to explain algorithmic bias in terms of the conditions that enable it. We focus on XAI feature attributions (Shapley values) and counterfactual approaches as potential tools to gauge these conditions (...)
  41. ACROCPoLis: A Descriptive Framework for Making Sense of Fairness. Andrea Aler Tubella, Dimitri Coelho Mollo, Adam Dahlgren, Hannah Devinney, Virginia Dignum, Petter Ericson, Anna Jonsson, Tim Kampik, Tom Lenaerts, Julian Mendez & Juan Carlos Nieves Sanchez - 2023 - Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency:1014-1025.
    Fairness is central to the ethical and responsible development and use of AI systems, with a large number of frameworks and formal notions of algorithmic fairness being available. However, many of the fairness solutions proposed revolve around technical considerations and not the needs of and consequences for the most impacted communities. We therefore want to take the focus away from definitions and allow for the inclusion of societal and relational aspects to represent how the effects of AI systems impact and (...)
  42. Apropos of "Speciesist bias in AI: how AI applications perpetuate discrimination and unfair outcomes against animals". Ognjen Arandjelović - 2023 - AI and Ethics.
    The present comment concerns a recent AI & Ethics article which purports to report evidence of speciesist bias in various popular computer vision (CV) and natural language processing (NLP) machine learning models described in the literature. I examine the authors' analysis and show it, ironically, to be prejudicial, often being founded on poorly conceived assumptions and suffering from fallacious and insufficiently rigorous reasoning, its superficial appeal in large part relying on the sequacity of the article's target readership.
  43. Big Data as Tracking Technology and Problems of the Group and its Members. Haleh Asgarinia - 2023 - In Kevin Macnish & Adam Henschke (eds.), The Ethics of Surveillance in Times of Emergency. Oxford University Press. pp. 60-75.
    Digital data help data scientists and epidemiologists track and predict outbreaks of disease. Mobile phone GPS data, social media data, or other forms of information updates such as the progress of epidemics are used by epidemiologists to recognize disease spread among specific groups of people. Targeting groups as potential carriers of a disease, rather than addressing individuals as patients, risks causing harm to groups. While there are rules and obligations at the level of the individual, we have to reach a (...)
  44. Melting contestation: insurance fairness and machine learning. Laurence Barry & Arthur Charpentier - 2023 - Ethics and Information Technology 25 (4):1-13.
    With their intensive use of data to classify and price risk, insurers have often been confronted with data-related issues of fairness and discrimination. This paper provides a comparative review of discrimination issues raised by traditional statistics versus machine learning in the context of insurance. We first examine historical contestations of insurance classification, showing that it was organized along three types of bias. Pure stereotypes, non-causal correlations, or causal effects that a society chooses to protect against are thus the main sources (...)
  45. From AI for people to AI for the world and the universe. Seth D. Baum & Andrea Owe - 2023 - AI and Society 38 (2):679-680.
    Recent work in AI ethics often calls for AI to advance human values and interests. The concept of “AI for people” is one notable example. Though commendable in some respects, this work falls short by excluding the moral significance of nonhumans. This paper calls for a shift in AI ethics to more inclusive paradigms such as “AI for the world” and “AI for the universe”. The paper outlines the case for more inclusive paradigms and presents implications for moral philosophy and (...)
  46. Fairness and Risk: An Ethical Argument for a Group Fairness Definition Insurers Can Use. Joachim Baumann & Michele Loi - 2023 - Philosophy and Technology 36 (3):1-31.
    Algorithmic predictions are promising for insurance companies to develop personalized risk models for determining premiums. In this context, issues of fairness, discrimination, and social injustice might arise: Algorithms for estimating the risk based on personal data may be biased towards specific social groups, leading to systematic disadvantages for those groups. Personalized premiums may thus lead to discrimination and social injustice. It is well known from many application fields that such biases occur frequently and naturally when prediction models are applied to (...)
  47. Reconciling Algorithmic Fairness Criteria. Fabian Beigang - 2023 - Philosophy and Public Affairs 51 (2):166-190.
    Philosophy & Public Affairs, Volume 51, Issue 2, Page 166-190, Spring 2023.
  48. Yet Another Impossibility Theorem in Algorithmic Fairness. Fabian Beigang - 2023 - Minds and Machines 33 (4):715-735.
    In recent years, there has been a surge in research addressing the question which properties predictive algorithms ought to satisfy in order to be considered fair. Three of the most widely discussed criteria of fairness are the criteria called equalized odds, predictive parity, and counterfactual fairness. In this paper, I will present a new impossibility result involving these three criteria of algorithmic fairness. In particular, I will argue that there are realistic circumstances under which any predictive algorithm that satisfies counterfactual (...)
  49. Knowledge representation and acquisition for ethical AI: challenges and opportunities. Vaishak Belle - 2023 - Ethics and Information Technology 25 (1):1-12.
    Machine learning (ML) techniques have become pervasive across a range of different applications, and are now widely used in areas as disparate as recidivism prediction, consumer credit-risk analysis, and insurance pricing. Likewise, in the physical world, ML models are critical components in autonomous agents such as robotic surgeons and self-driving cars. Among the many ethical dimensions that arise in the use of ML technology in such applications, analyzing morally permissible actions is both immediate and profound. For example, there is the (...)
  50. Correction: The Fair Chances in Algorithmic Fairness: A Response to Holm. Clinton Castro & Michele Loi - 2023 - Res Publica 29 (2):339-340.