  1. Urban Digital Twins and metaverses towards city multiplicities: uniting or dividing urban experiences? Javier Argota Sánchez-Vaquerizo - 2025 - Ethics and Information Technology 27 (1):1-31.
    Urban Digital Twins (UDTs) have become the new buzzword for researchers, planners, policymakers, and industry experts when it comes to designing, planning, and managing sustainable and efficient cities. They encapsulate the latest iteration of the technocratic, ultra-efficient, post-modernist vision of smart cities. However, while more applications branded as UDTs appear around the world, their conceptualization remains ambiguous. Beyond being technically prescriptive about what UDTs are, this article focuses on their aspects of interaction and operationalization in connection to people in (...)
  2. Dating apps as tools for social engineering. Martin Beckstein & Bouke De Vries - 2025 - Ethics and Information Technology 27 (1):1-13.
    In a bid to boost their below-replacement fertility levels, some countries, such as China, India, Iran, and Japan, have launched state-sponsored dating apps, with more potentially following. However, the use of dating apps as tools for social engineering has been largely neglected by political theorists and public policy experts. This article fills this gap. While acknowledging the risks and historical baggage of social engineering, the article provides a qualified defense of using these apps for three purposes: raising below-replacement birth rates, (...)
  3. Procedural fairness in algorithmic decision-making: the role of public engagement. Marie Christin Decker, Laila Wegner & Carmen Leicht-Scholten - 2025 - Ethics and Information Technology 27 (1):1-16.
    Despite the widespread use of automated decision-making (ADM) systems, they are often developed without involving the public or those directly affected, leading to concerns about systematic biases that may perpetuate structural injustices. Existing formal fairness approaches primarily focus on statistical outcomes across demographic groups or on individual fairness, yet these methods reveal ambiguities and limitations in addressing fairness comprehensively. This paper argues for a holistic approach to algorithmic fairness that integrates procedural fairness, considering both decision-making processes and their outcomes. Procedural fairness (...)
  4. AI responsibility gap: not new, inevitable, unproblematic. Huzeyfe Demirtas - 2025 - Ethics and Information Technology 27 (1):1-10.
    Who is responsible for a harm caused by AI, or by a machine or system that relies on artificial intelligence? Given that current AI is neither conscious nor sentient, it’s unclear that AI itself is responsible for it. But given that AI acts independently of its developer or user, it’s also unclear that the developer or user is responsible for the harm. This gives rise to the so-called responsibility gap: cases where AI causes a harm, but no one is responsible for (...)
  5. Mind the gap: bridging the divide between computer scientists and ethicists in shaping moral machines. Pablo Muruzábal Lamberti, Gunter Bombaerts & Wijnand IJsselsteijn - 2025 - Ethics and Information Technology 27 (1):1-11.
    This paper examines the ongoing challenges of interdisciplinary collaboration in Machine Ethics (ME), particularly the integration of ethical decision-making capacities into AI systems. Despite increasing demands for ethical AI, ethicists often remain on the sidelines, contributing primarily to metaethical discussions without directly influencing the development of moral machines. This paper revisits concerns highlighted by Tolmeijer et al. (2020), who identified the pitfall that computer scientists may misinterpret ethical theories without philosophical input. Using the MACHIAVELLI moral benchmark and the Delphi artificial (...)
  6. Robots, institutional roles and joint action: some key ethical issues. Seumas Miller - 2025 - Ethics and Information Technology 27 (1):1-11.
    In this article, firstly, cooperative interaction between robots and humans is discussed; specifically, the possibility of human/robot joint action and (relatedly) the possibility of robots occupying institutional roles alongside humans. The discussion makes use of concepts developed in social ontology. Secondly, certain key moral (or ethical; these terms are used interchangeably here) issues arising from this cooperative action are discussed, specifically issues that arise from robots performing (including qua role occupants) morally significant actions jointly with humans. Such morally significant human/robot joint (...)
  7. LLMs beyond the lab: the ethics and epistemics of real-world AI research. Joost Mollen - 2025 - Ethics and Information Technology 27 (1):1-11.
    Research under real-world conditions is crucial to the development and deployment of robust AI systems. Exposing large language models to complex use settings yields knowledge about their performance and impact, which cannot be obtained under controlled laboratory conditions or through anticipatory methods. This epistemic need for real-world research is exacerbated by large language models’ opaque internal operations and potential for emergent behavior. However, despite its epistemic value and widespread application, the ethics of real-world AI research has received little scholarly attention. To (...)
  8. Correction: The repugnant resolution: has Coghlan & Cox resolved the Gamer’s Dilemma? Thomas Montefiore & Morgan Luck - 2025 - Ethics and Information Technology 27 (1):1-1.
  9. Leading good digital lives. Johannes Müller-Salo - 2025 - Ethics and Information Technology 27 (1):1-11.
    The paper develops a conception of the good life within a digitalized society. Martha Nussbaum’s capability theory offers an adequate normative framework for that purpose as it systematically integrates the analysis of flourishing human lives with a normative theory of justice. The paper argues that a theory of good digital lives should focus on everyday life, on the impact digitalization has on ordinary actions, routines and corresponding practical knowledge. Based on Nussbaum’s work, the paper develops a concept of digital capabilities. (...)
  10. Nullius in Explanans: an ethical risk assessment for explainable AI. Luca Nannini, Diletta Huyskes, Enrico Panai, Giada Pistilli & Alessio Tartaro - 2025 - Ethics and Information Technology 27 (1):1-28.
    Explanations are conceived to ensure the trustworthiness of AI systems. Yet, relying solely on algorithmic solutions, as provided by explainable artificial intelligence (XAI), might fall short in accounting for sociotechnical risks that jeopardize their factuality and informativeness. To mitigate these risks, we delve into the complex landscape of ethical risks surrounding XAI systems and their generated explanations. By employing a literature review combined with rigorous thematic analysis, we uncover a diverse array of technical risks tied to the robustness, fairness, and evaluation (...)
  11. Possibilities and challenges in the moral growth of large language models: a philosophical perspective. Guoyu Wang, Wei Wang, Yiqin Cao, Yan Teng, Qianyu Guo, Haofen Wang, Junyu Lin, Jiajie Ma, Jin Liu & Yingchun Wang - 2025 - Ethics and Information Technology 27 (1):1-11.
    With the rapid expansion of parameters in large language models (LLMs) and the application of Reinforcement Learning with Human Feedback (RLHF), there has been a noticeable growth in the moral competence of LLMs. However, several questions warrant further exploration: Is it really possible for LLMs to fully align with human values through RLHF? How can the current moral growth be philosophically contextualized? We identify similarities between LLMs’ moral growth and Deweyan ethics in terms of the discourse of human moral development. (...)