Synthese 201 (3):1-21 (2023)
Abstract
In this paper, I present an analysis of the depictive properties of deepfakes. These are videos and pictures produced by deep learning algorithms that automatically modify existing videos and photographs or generate new ones. I argue that deepfakes have an intentional standard of correctness: a deepfake depicts its subject only insofar as its creator intends it to. This is due to the way in which these images are produced, which involves a degree of intentional control similar to that involved in the production of other intentional pictures, such as drawings and paintings. This aspect distinguishes deepfakes from real videos and photographs, which instead have a non-intentional standard: their correct interpretation corresponds to the scenes that were recorded by the mechanisms that produced them, not to what their producers intended them to represent. I show that these depictive properties make deepfakes fit for communicating information in the same way as language and other intentional pictures. That is, they do not provide direct access to the communicated information as non-intentional pictures (e.g., videos and photographs) do. Rather, they convey information indirectly, relying only on the viewer's assumptions about the communicative intentions of the deepfake's creator. Indirect communication is indeed a prominent function of such media in our society, but it is often overlooked in the philosophical literature. This analysis also explains what is epistemically worrying about deepfakes. With the introduction of this technology, viewers interpreting photorealistic videos and pictures can no longer be sure which standard to select (i.e., intentional or non-intentional), which makes misinterpretation likely: they can take an image to have a standard of correctness that it does not have.