Over 100 leading AI experts issued an open letter demanding that companies behind generative AI technologies, such as OpenAI and Meta, open their doors to independent testing.
Their message is clear: AI developers’ terms and conditions are curbing independent research efforts into AI tool safety.
Signatories include leading experts such as Stanford’s Percy Liang, Pulitzer Prize-winner Julia Angwin, Stanford Internet Observatory’s Renée DiResta, Mozilla Fellow Deb Raji, former European Parliament member Marietje Schaake, and Suresh Venkatasubramanian from Brown University.
Researchers argue that the mistakes of the social media era, when independent research was often marginalized, should not be repeated.
To combat this risk, they ask that OpenAI, Meta, Anthropic, Google, Midjourney, and others create a legal and technical safe space for researchers to evaluate AI products without fear of being sued or banned.
The letter says, “While companies’ terms of service deter malicious use, they also offer no exemption for independent good faith research, leaving researchers at risk of account suspension or even legal reprisal.”
AI companies impose strict usage policies to prevent their tools from being manipulated into bypassing guardrails. For example, OpenAI recently branded investigative efforts by the New York Times as “hacking,” and Meta has threatened to withdraw licenses over intellectual property disputes.
Other investigations probed Midjourney and revealed numerous instances of copyright violation, research that would itself have been against the company’s T&Cs.
The problem is that, because AI tools are largely unpredictable under the hood, they depend on people using them in specific ways to remain ‘safe.’
However, those same policies make it tough for researchers to probe and understand models.
The letter, published on MIT’s website, makes two pleas:
1. “First, a legal safe harbor would indemnify good faith independent AI safety, security, and trustworthiness research, provided it is conducted in accordance with well-established vulnerability disclosure rules.”
2. “Second, companies should commit to more equitable access, by using independent reviewers to moderate researchers’ evaluation applications, which would protect rule-abiding safety research from counterproductive account suspensions, and mitigate the concern of companies selecting their own evaluators.”
The letter also introduces a policy proposal, co-drafted by some signatories, which suggests modifications in the companies’ terms of service to accommodate academic and safety research.
This contributes to a broadening consensus about the risks associated with generative AI, including bias, copyright infringement, and the creation of non-consensual intimate imagery.
By advocating for a “safe harbor” for independent evaluation, these experts are championing the cause of public interest, aiming to create an ecosystem where AI technologies can be developed and deployed responsibly, with the well-being of society at the forefront.