
Anthropic looks to fund a new, more comprehensive generation of AI benchmarks | TechCrunch

Jul 02, 2024 - techcrunch.com
Anthropic, an AI company, is launching a program to fund the development of new benchmarks for evaluating the performance and impact of AI models. The program will provide grants to third-party organizations that can measure advanced capabilities in AI models. The company aims to address the current benchmarking problem in AI, where most benchmarks do not accurately reflect how systems are used in real-world scenarios. The company is particularly interested in creating benchmarks that assess AI's potential for tasks such as carrying out cyberattacks, enhancing weapons of mass destruction, and manipulating or deceiving people.

However, there are concerns about the company's transparency and its commercial ambitions in the AI race. Some experts argue that the company's definitions of "safe" or "risky" AI may not align with others in the field. Additionally, there is skepticism about Anthropic's references to catastrophic and deceptive AI risks, with some experts arguing that there is little evidence to suggest AI will gain world-ending capabilities. Despite these concerns, Anthropic hopes its program will serve as a catalyst for progress towards comprehensive AI evaluation as an industry standard.

Key takeaways:

  • Anthropic is launching a program to fund the development of new benchmarks for evaluating the performance and impact of AI models, including generative models.
  • The company is calling for tests that assess a model’s ability to carry out cyberattacks, enhance weapons of mass destruction, and manipulate or deceive people.
  • Anthropic also intends to support research into benchmarks that probe AI’s potential for aiding in scientific study, conversing in multiple languages, and mitigating ingrained biases.
  • However, there are concerns that the company's commercial ambitions may influence the definitions of "safe" or "risky" AI, and that its references to "catastrophic" and "deceptive" AI risks may be overstated.
