A Metacognitive Approach to Trust and a Case Study: Artificial Agency

Computer Ethics - Philosophical Enquiry (CEPE) Proceedings (2019)

Abstract

Trust is defined as a belief of a human H (the ‘trustor’) about the ability of an agent A (the ‘trustee’) to perform future action(s). We adopt here dispositionalism and internalism about trust: H trusts A iff A has certain internal dispositions as competences. The dispositional competences of A are high-level metacognitive requirements, in line with a naturalized virtue epistemology (Sosa, Carter). We advance a Bayesian model of two such requirements: (i) confidence in the decision and (ii) model uncertainty. To trust A, H demands that A be self-assertive about its confidence and able to self-correct its own models. On the Bayesian approach, trust can be applied not only to humans but also to artificial agents (e.g., machine-learning algorithms). We explain the advantage of metacognitive trust over mainstream approaches and how it relates to virtue epistemology. The metacognitive ethics of trust is briefly discussed.
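The two Bayesian quantities the abstract names, (i) confidence in the decision and (ii) model uncertainty, can be given a minimal computational sketch. The decomposition below (predictive entropy of the ensemble-averaged prediction minus the expected entropy of each member, i.e. the mutual information between prediction and model choice) is one standard way to estimate model uncertainty; it is an assumption for illustration only, not the paper's own formalism, and all names here (`trust_signals`, the toy ensembles) are hypothetical.

```python
import numpy as np

def trust_signals(ensemble_probs):
    """Given class probabilities from an ensemble of models
    (shape: n_models x n_classes), return two illustrative signals:
    (i) confidence in the decision, and (ii) model uncertainty,
    measured as mutual information between prediction and model."""
    mean_probs = ensemble_probs.mean(axis=0)   # posterior predictive (model average)
    confidence = mean_probs.max()              # (i) confidence in the decision
    # Predictive entropy of the averaged prediction (total uncertainty)
    total = -np.sum(mean_probs * np.log(mean_probs + 1e-12))
    # Expected entropy of the individual members (data uncertainty)
    expected = -np.mean(np.sum(ensemble_probs * np.log(ensemble_probs + 1e-12), axis=1))
    model_uncertainty = total - expected       # (ii) mutual information
    return confidence, model_uncertainty

# Members agree: high confidence, near-zero model uncertainty
agree = np.array([[0.90, 0.10], [0.88, 0.12], [0.92, 0.08]])
# Members conflict: low confidence, high model uncertainty
disagree = np.array([[0.95, 0.05], [0.05, 0.95], [0.50, 0.50]])

c1, u1 = trust_signals(agree)
c2, u2 = trust_signals(disagree)
```

On this sketch, an agent that reports `confidence` is "self-assertive" about its decision, while a large `model_uncertainty` flags exactly the situation in which the abstract's second requirement, the ability to self-correct one's own models, becomes relevant.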


Links

PhilArchive

Analytics

Added to PP
2020-02-24


Author's Profile

Ioan Muntean
University of Texas Rio Grande Valley

