Google DeepMind launches new framework to assess the dangers of AI models | Semafor

May 17, 2024 - semafor.com
Google DeepMind has released a framework for monitoring AI models to catch dangerous capabilities before they emerge. The process involves reevaluating a model every time the compute used to train it increases six-fold, or whenever it has been fine-tuned for three months. Between those scheduled checks, DeepMind will design early warning evaluations intended to flag models that are approaching a dangerous capability threshold. DeepMind will collaborate with other companies, academia, and lawmakers to improve the framework and plans to implement its auditing tools by 2025.
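The trigger rule described above amounts to a simple threshold check, sketched below in Python. Only the six-fold compute and three-month fine-tuning figures come from the article; every function name, unit, and data structure here is an illustrative assumption, not DeepMind's actual tooling.

    from datetime import date, timedelta

    # Thresholds from the article; all surrounding code is hypothetical.
    COMPUTE_GROWTH_TRIGGER = 6              # re-evaluate at 6x training compute
    FINE_TUNE_TRIGGER = timedelta(days=90)  # ~3 months of fine-tuning

    def needs_reevaluation(
        compute_at_last_eval: float,  # training FLOPs at the last evaluation
        current_compute: float,       # cumulative training FLOPs today
        fine_tune_start: date,        # when the current fine-tuning run began
        today: date,
    ) -> bool:
        """Return True if either re-evaluation trigger has been met."""
        compute_grew = current_compute >= COMPUTE_GROWTH_TRIGGER * compute_at_last_eval
        fine_tuned_long = (today - fine_tune_start) >= FINE_TUNE_TRIGGER
        return compute_grew or fine_tuned_long

    # Example: compute has grown 6.5x since the last check, so a new
    # evaluation is due even though fine-tuning began only recently.
    print(needs_reevaluation(1e24, 6.5e24, date(2024, 5, 1), date(2024, 5, 17)))  # True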

Evaluation of powerful AI models is currently an ad hoc process that evolves as new techniques are developed, but as models grow more capable, a more robust process is needed. DeepMind's Frontier Safety Framework aims to address this gap, and with it the company joins other major tech firms, including Meta, OpenAI, and Microsoft, in announcing measures to mitigate concerns about AI.

Key takeaways:

  • Google DeepMind has released a framework for monitoring AI models to determine if they're approaching dangerous capabilities.
  • The process involves reevaluating DeepMind’s models every time the compute used to train a model increases six-fold, or whenever a model has been fine-tuned for three months.
  • DeepMind plans to work with other companies, academia, and lawmakers to improve the framework and aims to start implementing its auditing tools by 2025.
  • The Frontier Safety Framework is one of several methods announced by major tech companies, including Meta, OpenAI, and Microsoft, to mitigate concerns about AI.