Order:
  1. The Ghost in the Machine has an American accent: value conflict in GPT-3. Rebecca Johnson, Giada Pistilli, Natalia Menedez-Gonzalez, Leslye Denisse Dias Duran, Enrico Panai, Julija Kalpokiene & Donald Jay Bertulfo - manuscript
    The alignment problem in the context of large language models must consider the plurality of human values in our world. Whilst there are many resonant and overlapping values amongst the world’s cultures, there are also many conflicting, yet equally valid, values. It is important to observe which cultural values a model exhibits, particularly when there is a value conflict between input prompts and generated outputs. We discuss how the co-creation of language and cultural value impacts large language models (LLMs). (...)
  2. The latent space of data ethics. Enrico Panai - 2024 - AI and Society 39 (6):2647-2665.
    In informationally mature societies, almost all organisations record, generate, process, use, share and disseminate data. In particular, the rise of AI and autonomous systems has corresponded to an improvement in computational power and in solving complex problems. However, the resulting possibilities have been coupled with an upsurge of ethical risks. To avoid the misuse, underuse, and harmful use of data and data-based systems like AI, we should use an ethical framework appropriate to the object of its reasoning. Unfortunately, in recent (...)
  3. AI-enhanced nudging: A Risk-factors Analysis. Marianna Bergamaschi Ganapini & Enrico Panai - forthcoming - American Philosophical Quarterly.
    Artificial intelligence technologies are utilized to provide online personalized recommendations, suggestions, or prompts that can influence people's decision-making processes. We call this AI-enhanced nudging (or AI-nudging for short). Contrary to received wisdom, we claim that AI-enhanced nudging is not necessarily morally problematic. To start assessing the risks and moral import of AI-nudging we believe that we should adopt a risk-factor analysis: we show that both the level of risk and possibly the moral value of adopting AI-nudging ultimately depend on (...)
     
  4. Nullius in Explanans: an ethical risk assessment for explainable AI. Luca Nannini, Diletta Huyskes, Enrico Panai, Giada Pistilli & Alessio Tartaro - 2025 - Ethics and Information Technology 27 (1):1-28.
    Explanations are conceived to ensure the trustworthiness of AI systems. Yet, relying solely on algorithmic solutions, as provided by explainable artificial intelligence (XAI), might fall short of accounting for sociotechnical risks jeopardizing their factuality and informativeness. To mitigate these risks, we delve into the complex landscape of ethical risks surrounding XAI systems and their generated explanations. By employing a literature review combined with rigorous thematic analysis, we uncover a diverse array of technical risks tied to the robustness, fairness, and evaluation (...)