  1. Use case cards: a use case reporting framework inspired by the European AI Act. Emilia Gómez, Sandra Baldassarri, David Fernández-Llorca & Isabelle Hupont - 2024 - Ethics and Information Technology 26 (2):1-23.
    Despite recent efforts by the Artificial Intelligence (AI) community to move towards standardised procedures for documenting models, methods, systems or datasets, there is currently no methodology focused on use cases aligned with the risk-based approach of the European AI Act (AI Act). In this paper, we propose a new framework for the documentation of use cases that we call use case cards, based on the use case modelling included in the Unified Modeling Language (UML) standard. Unlike other documentation methodologies, we (...)
    (A hypothetical sketch of what such a card might record appears after this list.)
  2. An interdisciplinary account of the terminological choices by EU policymakers ahead of the final agreement on the AI Act: AI system, general purpose AI system, foundation model, and generative AI. David Fernández-Llorca, Emilia Gómez, Ignacio Sánchez & Gabriele Mazzini - forthcoming - Artificial Intelligence and Law:1-14.
    The European Union’s Artificial Intelligence Act (AI Act) is a groundbreaking regulatory framework that integrates technical concepts and terminology from the rapidly evolving ecosystems of AI research and innovation into the legal domain. Precise definitions accessible to both AI experts and lawyers are crucial for the legislation to be effective. This paper provides an interdisciplinary analysis of the concepts of AI system, general purpose AI system, foundation model and generative AI across the different versions of the legal text (Commission proposal, (...)
  3. Evaluating causes of algorithmic bias in juvenile criminal recidivism. Marius Miron, Songül Tolan, Emilia Gómez & Carlos Castillo - 2020 - Artificial Intelligence and Law 29 (2):111-147.
    In this paper we investigate risk prediction of criminal re-offense among juvenile defendants using general-purpose machine learning algorithms. We show that in our dataset, containing hundreds of cases, ML models achieve better predictive power than a structured professional risk assessment tool, the Structured Assessment of Violence Risk in Youth (SAVRY), at the expense of not satisfying relevant group fairness metrics that SAVRY does satisfy. We explore in more detail two possible causes of this algorithmic bias that are related to biases in (...)
    (A toy computation of such group fairness metrics appears after this list.)
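The card template of entry 1 is not reproduced in its abstract, so the following is only a rough illustration: a minimal Python sketch of what a use-case-card record might hold, combining UML-style actors and use cases with an AI Act risk tier. Every field name here is an assumption made for illustration, not the schema proposed by Gómez et al.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List


class AIActRiskLevel(Enum):
    """Risk tiers of the AI Act's risk-based approach."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


@dataclass
class UseCaseCard:
    """Hypothetical use-case-card record; field names are illustrative,
    not the authors' schema."""
    title: str
    intended_purpose: str
    actors: List[str]            # UML actors interacting with the system
    use_cases: List[str]         # UML use cases the system supports
    risk_level: AIActRiskLevel   # assessed AI Act risk tier
    notes: str = ""


# Example: a CV-screening system, a canonical high-risk use case
# (employment is among the high-risk areas listed by the AI Act).
card = UseCaseCard(
    title="Automated CV screening",
    intended_purpose="Rank job applicants for human review",
    actors=["Recruiter", "Applicant"],
    use_cases=["Submit CV", "Score CV", "Review ranking"],
    risk_level=AIActRiskLevel.HIGH,
)
print(card.title, card.risk_level.value)
```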
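The trade-off described in entry 3, predictive power gained at the cost of group fairness, can be made concrete with standard fairness metrics. The sketch below computes demographic parity and false-positive-rate gaps between two groups; the synthetic data, classifier, and numbers are invented for illustration and are not the paper's dataset or results.

```python
import numpy as np


def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    rates = [y_pred[group == g].mean() for g in (0, 1)]
    return abs(rates[0] - rates[1])


def fpr_gap(y_true, y_pred, group):
    """Absolute difference in false positive rates between two groups."""
    fprs = []
    for g in (0, 1):
        negatives = (group == g) & (y_true == 0)   # true negatives in group g
        fprs.append(y_pred[negatives].mean())      # share wrongly flagged
    return abs(fprs[0] - fprs[1])


rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)    # synthetic protected attribute
y_true = rng.integers(0, 2, n)   # synthetic re-offense labels
# A deliberately biased classifier: flags group 1 more often than group 0
# regardless of the true label.
y_pred = (rng.random(n) < np.where(group == 1, 0.6, 0.4)).astype(int)

print(f"demographic parity gap: {demographic_parity_gap(y_pred, group):.3f}")
print(f"false positive rate gap: {fpr_gap(y_true, y_pred, group):.3f}")
```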