Synthese 202 (5):1-34 (2023)
Abstract
Deepfakes are fake recordings generated by machine learning algorithms. Various philosophical explanations have been proposed to account for their epistemic harmfulness. In this paper, I argue that deepfakes are epistemically harmful because they undermine trust in recording technology. As a result, we are no longer entitled to our default doxastic attitude of believing that P on the basis of a recording that supports the truth of P. The distrust engendered by deepfakes changes the epistemic status of recordings to resemble that of handmade images: their credibility, like that of testimony, depends partly on the credibility of the source. Finally, I examine some proposed technical solutions from a philosophical perspective to show the practical relevance of these considerations.