Publication

Human detection of political deepfakes across transcripts, audio, and video


Groh, Matthew, Aruna Sankaranarayanan, Andrew Lippman, and Rosalind Picard. "Human detection of political deepfakes across transcripts, audio, and video." arXiv preprint arXiv:2202.12883 (2022).

Abstract

Recent advances in technology for hyper-realistic visual effects provoke the concern that deepfake videos of political speeches will soon be visually indistinguishable from authentic video recordings. The conventional wisdom in communication research predicts that people will fall for fake news more often when the same story is presented as video rather than text. Here, we evaluate how accurately 41,822 participants distinguish real political speeches from fabrications in an experiment where speeches are randomized to appear as permutations of text, audio, and video. We find that access to audio and visual communication modalities improves participants' accuracy: human judgment relies more on how something is said (the audio-visual cues) than on what is said (the speech content). However, we find that reflective reasoning moderates the degree to which participants consider visual information: low performance on the Cognitive Reflection Test is associated with an over-reliance on what is said.
