A Formal Account of AI Trustworthiness: Connecting Intrinsic and Perceived Trustworthiness

AIES '24: Proceedings of the 2024 AAAI/ACM Conference on AI, Ethics, and Society (forthcoming)

Abstract

This paper proposes a formal account of AI trustworthiness, connecting both intrinsic and perceived trustworthiness in an operational schematization. We argue that trustworthiness extends beyond the inherent capabilities of an AI system to include significant influences from observers' perceptions, such as perceived transparency, agency locus, and human oversight. While the concept of perceived trustworthiness is discussed in the literature, few attempts have been made to connect it with the intrinsic trustworthiness of AI systems. Our analysis introduces a novel schematization to quantify trustworthiness by assessing the discrepancies between expected and observed behaviors and how these affect perceived uncertainty and trust. The paper provides a formalization for measuring trustworthiness, taking into account both perceived and intrinsic characteristics. By detailing the factors that influence trust, this study aims to foster more ethical and widely accepted AI technologies, ensuring they meet both functional and ethical criteria.
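
As a minimal illustrative sketch of what such a discrepancy-based quantification could look like (an assumption for illustration only, not the formalization given in the paper; E, O, d, and λ are hypothetical placeholders), perceived trustworthiness might be written as

    T_perceived = exp(-λ · d(E, O)),   with λ > 0,

where E denotes the behavior an observer expects from the system, O the behavior actually observed, d(E, O) a discrepancy measure between the two, and λ a sensitivity parameter. On this reading, T_perceived is maximal when observed behavior matches expectations exactly and decays as the discrepancy, and hence the observer's perceived uncertainty, grows.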

Other Versions

No versions found

Links

PhilArchive

External links

  • This entry has no external links.

Analytics

Added to PP
2024-08-18

Downloads
179 (#134,203)

6 months
179 (#19,956)

Author's Profile

Piercosma Bisconti
Scuola Superiore di Studi Universitari e di Perfezionamento Sant'Anna

Citations of this work

No citations found.

References found in this work

No references found.