Deepfakes and trust in technology

Synthese 202 (5):1-34 (2023)

Abstract

Deepfakes are fake recordings generated by machine learning algorithms. Various philosophical explanations have been proposed to account for their epistemic harmfulness. In this paper, I argue that deepfakes are epistemically harmful because they undermine trust in recording technology. As a result, we are no longer entitled to our default doxastic attitude of believing that P on the basis of a recording that supports the truth of P. The distrust engendered by deepfakes changes the epistemic status of recordings to resemble that of handmade images: their credibility, like that of testimony, depends partly on the credibility of the source. Finally, I assess some proposed technical solutions from a philosophical perspective to show their practical relevance.

