How should AI-generated content be labeled?

By Brian Eastwood

In late October, President Joe Biden issued a wide-ranging executive order on AI security and safety. The order includes new standards and best practices for clearly labeling AI-generated content, in part to help Americans determine whether communications that appear to be from the government are authentic.

This points to a concern that as generative AI becomes more widely used, manipulated content could easily spread false information. As the executive order indicates, content labels are one strategy for combating the spread of misinformation. But what are the right terms to use? Which labels will the public widely understand as indicating that something was generated or manipulated by artificial intelligence, or that it is intentionally misleading?

A new working paper co-authored by MIT Sloan professor David Rand found that across the United States, Mexico, Brazil, India, and China, people associated certain terms, such as “AI generated” and “AI manipulated,” most closely with content created using AI. Conversely, the labels “deepfake” and “manipulated” were most associated with misleading content, whether or not AI created it.

These results show that most people have a reasonable understanding of what “AI” means, which is a good starting point. They also suggest that any effort to label content needs to consider the overarching goal, said Rand, a professor of management science and brain and cognitive sciences. Rand co-authored the paper with Ziv Epstein, SM ’19 and PhD ’23, a postdoctoral fellow at Stanford; MIT graduate researcher Cathy Fang, SM ’23; and Antonio A. Arechar, a professor at the Center for Research and Teaching in Economics in Aguascalientes, Mexico.
