Experts call for legal ‘safe harbor’ so researchers, journalists and artists can evaluate AI tools

By Sharon Goldman

According to a new paper from 23 AI researchers, academics and creatives, ‘safe harbor’ legal and technical protections are essential to allow researchers, journalists and artists to conduct “good-faith” evaluations of AI products and services.

Despite the need for independent evaluation, the paper says, research into these systems’ vulnerabilities is often legally prohibited by the terms of service of popular AI models, including those of OpenAI, Google, Anthropic, Inflection, Meta, and Midjourney. The paper’s authors called on tech companies to indemnify public interest AI research and protect it from account suspensions and legal reprisal.

“While these terms are intended as a deterrent against malicious actors, they also inadvertently restrict AI safety and trustworthiness research; companies forbid the research and may enforce their policies with account suspensions,” said a blog post accompanying the paper.

Two of the paper’s co-authors, Shayne Longpre of MIT Media Lab and Sayash Kapoor of Princeton University, told VentureBeat that such protections are particularly important given cases like OpenAI’s recent effort to dismiss parts of the New York Times’ lawsuit, in which the company characterized the Times’ evaluation of ChatGPT as “hacking.” The Times’ lead counsel responded by saying, “What OpenAI bizarrely mischaracterizes as ‘hacking’ is simply using OpenAI’s products to look for evidence that they stole and reproduced the Times’s copyrighted works.”
