  1. Responsible guidelines for authorship attribution tasks in NLP. Vageesh Saxena, Aurelia Tamò-Larrieux, Gijs Van Dijck & Gerasimos Spanakis - 2025 - Ethics and Information Technology 27 (2).
    Authorship Attribution (AA) approaches in Natural Language Processing (NLP) are important in various domains, including forensic analysis and cybercrime. However, they pose Ethical, Legal, and Societal Implications/Aspects (ELSI/ELSA) challenges that remain underexplored. Inspired by foundational AI ethics guidelines and frameworks, this research introduces a comprehensive framework of responsible guidelines for AA tasks in NLP, tailored to different stakeholders and development phases. These guidelines are structured around four core principles: privacy and data protection, fairness and non-discrimination, transparency (...)
  2. Urban Digital Twins and metaverses towards city multiplicities: uniting or dividing urban experiences? Javier Argota Sánchez-Vaquerizo - 2025 - Ethics and Information Technology 27 (1):1-31.
    Urban Digital Twins (UDTs) have become the new buzzword for researchers, planners, policymakers, and industry experts when it comes to designing, planning, and managing sustainable and efficient cities. They encapsulate the latest iteration of the technocratic and ultra-efficient, post-modernist vision of smart cities. However, while more applications branded as UDTs appear around the world, their conceptualization remains ambiguous. Beyond being technically prescriptive about what UDTs are, this article focuses on their aspects of interaction and operationalization in connection to people in (...)
  3. Correction: Beyond transparency and explainability: on the need for adequate and contextualized user guidelines for LLM use. Kristian González Barman, Nathan Wood & Pawel Pawlowski - 2025 - Ethics and Information Technology 27 (1):1-1.
  4. Dating apps as tools for social engineering. Martin Beckstein & Bouke De Vries - 2025 - Ethics and Information Technology 27 (1):1-13.
    In a bid to boost their below-replacement fertility levels, some countries, such as China, India, Iran, and Japan, have launched state-sponsored dating apps, with more potentially following. However, the use of dating apps as tools for social engineering has been largely neglected by political theorists and public policy experts. This article fills this gap. While acknowledging the risks and historical baggage of social engineering, the article provides a qualified defense of using these apps for three purposes: raising below-replacement birth rates, (...)
  5. Procedural fairness in algorithmic decision-making: the role of public engagement. Marie Christin Decker, Laila Wegner & Carmen Leicht-Scholten - 2025 - Ethics and Information Technology 27 (1):1-16.
    Despite the widespread use of automated decision-making (ADM) systems, they are often developed without involving the public or those directly affected, leading to concerns about systematic biases that may perpetuate structural injustices. Existing formal fairness approaches primarily focus on statistical outcomes across demographic groups or individual fairness, yet these methods reveal ambiguities and limitations in addressing fairness comprehensively. This paper argues for a holistic approach to algorithmic fairness that integrates procedural fairness, considering both decision-making processes and their outcomes. Procedural fairness (...)
  6. AI responsibility gap: not new, inevitable, unproblematic. Huzeyfe Demirtas - 2025 - Ethics and Information Technology 27 (1):1-10.
    Who is responsible for a harm caused by AI, or a machine or system that relies on artificial intelligence? Given that current AI is neither conscious nor sentient, it’s unclear that AI itself is responsible for it. But given that AI acts independently of its developer or user, it’s also unclear that the developer or user is responsible for the harm. This gives rise to the so-called responsibility gap: cases where AI causes a harm, but no one is responsible for (...)
  7. Disembodied friendship: virtual friends and the tendencies of technologically mediated friendship. Daniel Grasso - 2025 - Ethics and Information Technology 27 (1):1-11.
    This paper engages the ongoing debate around the possibility of virtue friendships in the Aristotelian sense through online mediation. However, I argue that since the current literature has remained overly focused on the mere possibility of virtual friendship, it has obscured the more common phenomenon of using digital communication to sustain previously in-person friendships that are now at a distance. While I agree with those who argue that entirely virtual friendship is possible, I argue that the current rebuttals to the (...)
  8. Designing responsible agents. Zacharus Gudmunsen - 2025 - Ethics and Information Technology 27 (1):1-11.
    Raul Hakli & Pekka Mäkelä (2016, 2019) make a popular assumption in machine ethics explicit by arguing that artificial agents cannot be responsible because they are designed. Designed agents, they think, are analogous to manipulated humans and therefore not meaningfully in control of their actions. Contrary to this, I argue that under all mainstream theories of responsibility, designed agents can be responsible. To do so, I identify the closest parallel discussion in the literature on responsibility and free will, which concerns (...)
  9. Mind the gap: bridging the divide between computer scientists and ethicists in shaping moral machines. Pablo Muruzábal Lamberti, Gunter Bombaerts & Wijnand IJsselsteijn - 2025 - Ethics and Information Technology 27 (1):1-11.
    This paper examines the ongoing challenges of interdisciplinary collaboration in Machine Ethics (ME), particularly the integration of ethical decision-making capacities into AI systems. Despite increasing demands for ethical AI, ethicists often remain on the sidelines, contributing primarily to metaethical discussions without directly influencing the development of moral machines. This paper revisits concerns highlighted by Tolmeijer et al. (2020), who identified the pitfall that computer scientists may misinterpret ethical theories without philosophical input. Using the MACHIAVELLI moral benchmark and the Delphi artificial (...)
  10. Robots, institutional roles and joint action: some key ethical issues. Seumas Miller - 2025 - Ethics and Information Technology 27 (1):1-11.
    In this article, firstly, cooperative interaction between robots and humans is discussed; specifically, the possibility of human/robot joint action and (relatedly) the possibility of robots occupying institutional roles alongside humans. The discussion makes use of concepts developed in social ontology. Secondly, certain key moral (or ethical—these terms are used interchangeably here) issues arising from this cooperative action are discussed, specifically issues that arise from robots performing (including qua role occupants) morally significant actions jointly with humans. Such morally significant human/robot joint (...)
  11. LLMs beyond the lab: the ethics and epistemics of real-world AI research. Joost Mollen - 2025 - Ethics and Information Technology 27 (1):1-11.
    Research under real-world conditions is crucial to the development and deployment of robust AI systems. Exposing large language models to complex use settings yields knowledge about their performance and impact, which cannot be obtained under controlled laboratory conditions or through anticipatory methods. This epistemic need for real-world research is exacerbated by large language models’ opaque internal operations and potential for emergent behavior. However, despite its epistemic value and widespread application, the ethics of real-world AI research has received little scholarly attention. To (...)
  12. Correction: The repugnant resolution: has Coghlan & Cox resolved the Gamer’s Dilemma? Thomas Montefiore & Morgan Luck - 2025 - Ethics and Information Technology 27 (1):1-1.
  13. Leading good digital lives. Johannes Müller-Salo - 2025 - Ethics and Information Technology 27 (1):1-11.
    The paper develops a conception of the good life within a digitalized society. Martha Nussbaum’s capability theory offers an adequate normative framework for that purpose as it systematically integrates the analysis of flourishing human lives with a normative theory of justice. The paper argues that a theory of good digital lives should focus on everyday life, on the impact digitalization has on ordinary actions, routines and corresponding practical knowledge. Based on Nussbaum’s work, the paper develops a concept of digital capabilities. (...)
  14. Nullius in Explanans: an ethical risk assessment for explainable AI. Luca Nannini, Diletta Huyskes, Enrico Panai, Giada Pistilli & Alessio Tartaro - 2025 - Ethics and Information Technology 27 (1):1-28.
    Explanations are conceived to ensure the trustworthiness of AI systems. Yet, relying solely on algorithmic solutions, as provided by explainable artificial intelligence (XAI), might fall short of accounting for sociotechnical risks jeopardizing their factuality and informativeness. To mitigate these risks, we delve into the complex landscape of ethical risks surrounding XAI systems and their generated explanations. By employing a literature review combined with rigorous thematic analysis, we uncover a diverse array of technical risks tied to the robustness, fairness, and evaluation (...)
  15. What responsibility gaps are and what they should be. Herman Veluwenkamp - 2025 - Ethics and Information Technology 27 (1):1-13.
    Responsibility gaps traditionally refer to scenarios in which no one is responsible for harm caused by artificial agents, such as autonomous machines or collective agents. By carefully examining the different ways this concept has been defined in the social ontology and ethics of technology literature, I argue that our current concept of responsibility gaps is defective. To address this conceptual flaw, I argue that the concept of responsibility gaps should be revised by distinguishing it into two more precise concepts: epistemic (...)
  16. Possibilities and challenges in the moral growth of large language models: a philosophical perspective. Guoyu Wang, Wei Wang, Yiqin Cao, Yan Teng, Qianyu Guo, Haofen Wang, Junyu Lin, Jiajie Ma, Jin Liu & Yingchun Wang - 2025 - Ethics and Information Technology 27 (1):1-11.
    With the rapid expansion of parameters in large language models (LLMs) and the application of Reinforcement Learning with Human Feedback (RLHF), there has been a noticeable growth in the moral competence of LLMs. However, several questions warrant further exploration: Is it really possible for LLMs to fully align with human values through RLHF? How can the current moral growth be philosophically contextualized? We identify similarities between LLMs’ moral growth and Deweyan ethics in terms of the discourse of human moral development. (...)
  17. Trust and Power in Airbnb’s Digital Rating and Reputation System. Tim Christiaens - 2025 - Ethics and Information Technology.
    Customer ratings and reviews are playing a key role in the contemporary platform economy. To establish trust among strangers without having to directly monitor platform users themselves, companies ask people to evaluate each other. Firms like Uber, Deliveroo, or Airbnb construct digital reputation scores by combining these consumer data with their own information from the algorithmic surveillance of workers. Trustworthy behavior is subsequently rewarded with a good reputation score and higher potential earnings, while untrustworthy behavior can be algorithmically penalized. (...)