Top AI researchers say OpenAI, Meta and more hinder independent evaluations

Mar 06, 2024 - washingtonpost.com
More than 100 leading artificial intelligence (AI) researchers have signed an open letter urging generative AI companies to grant investigators access to their systems for safety testing. The researchers argue that strict company protocols, designed to prevent misuse of AI systems, are instead hindering independent research, since auditors who probe the systems without permission risk account bans or legal action. The letter, signed by experts in AI research, policy, and law, was sent to companies including OpenAI, Meta, Anthropic, Google, and Midjourney, urging them to provide a legal and technical safe harbor for researchers examining their products.

The letter comes as AI companies increasingly block outside auditors from their systems. For instance, OpenAI recently accused the New York Times of "hacking" its ChatGPT chatbot in an attempt to find potential copyright violations, and Meta's new terms state that the company will revoke the license for its latest large language model, LLaMA 2, if a user alleges the system infringes on intellectual property rights. The researchers argue that this setup fosters favoritism, with companies hand-selecting their evaluators, and call for direct channels through which outside researchers can report problems with the companies' tools.

Key takeaways:

  • Over 100 leading AI researchers have signed an open letter urging generative AI companies to allow investigators access to their systems for safety testing, arguing that current company rules are hindering independent research.
  • The letter was sent to companies such as OpenAI, Meta, Anthropic, Google and Midjourney, and was signed by experts in AI research, policy, and law, including Stanford University’s Percy Liang and Pulitzer Prize-winning journalist Julia Angwin.
  • The researchers argue that strict protocols designed to prevent misuse of AI systems are instead stifling independent research, with auditors fearing account bans or legal action if they safety-test AI models without company approval.
  • The letter also calls for a legal and technical safe harbor for researchers to examine products, and for companies to provide direct channels for outside researchers to report issues with their tools.