The letter comes as AI companies increasingly block outside auditors from their systems. OpenAI, for instance, recently accused the New York Times of "hacking" its ChatGPT chatbot while searching for potential copyright violations. Meta's new terms state that it will revoke the license to its latest large language model, Llama 2, if a user alleges the system infringes on intellectual property rights. The researchers argue that this setup fosters favoritism, with companies hand-picking their evaluators, and call for direct channels through which outside researchers can report problems with the companies' tools.
Key takeaways:
- Over 100 leading AI researchers have signed an open letter urging generative AI companies to allow investigators access to their systems for safety testing, arguing that current company rules are hindering independent research.
- The letter was sent to companies including OpenAI, Meta, Anthropic, Google, and Midjourney, and was signed by experts in AI research, policy, and law, among them Stanford University’s Percy Liang and Pulitzer Prize-winning journalist Julia Angwin.
- The researchers argue that strict protocols designed to prevent misuse of AI systems are instead chilling independent research, with auditors fearing account bans or legal action if they attempt to safety-test AI models without company approval.
- The letter also calls for a legal and technical safe harbor for researchers to examine products, and for companies to provide direct channels for outside researchers to report issues with their tools.