Human Detection of Machine-Manipulated Media

By Matthew Groh, Ziv Epstein, Nick Obradovich, Manuel Cebrian, Iyad Rahwan

The recent emergence of artificial intelligence (AI)-powered media manipulations has widespread societal implications for journalism and democracy, national security, and art. AI models have the potential to scale misinformation to unprecedented levels by creating various forms of synthetic media. For example, AI systems can synthesize realistic video portraits of an individual with full control of facial expressions, including eye and lip movement; clone a speaker's voice with a few training samples and generate new natural-sounding audio of something the speaker never said; synthesize visually indicated sound effects; generate high-quality, relevant text based on an initial prompt; produce photorealistic images of a variety of objects from text inputs; and generate photorealistic videos of people expressing emotions from only a single image. The technologies for producing machine-generated, fake media online may outpace the ability to manually detect and respond to such media.
