Just accountability structures – a way to promote the safe use of automated decision-making in the public sector

AI and Society 39 (1):155-167 (2024)

Abstract

The growing use of automated decision-making (ADM) systems in the public sector, and the need to control them, has raised many legal questions in academic research and in policymaking. One timely means of legal control is accountability, which traditionally includes, as one of its dimensions, the ability to impose sanctions on the violator. Although many risks of using ADM have been noted and there is a common will to promote the safety of these systems, the relevance of safety research has received little discussion in this context. In this article, I evaluate the regulation of accountability for the use of ADM in the public sector against the findings of safety research. The study focuses on two ongoing regulatory projects concerning ADM: the Finnish ADM legislation draft and the EU proposal for the AI Act. The critical question raised in the article is what role sanctions should play. I ask whether official accountability could mean an opportunity to learn from mistakes, share knowledge and compensate for harm, rather than control through sanctions.
