Evaluation of powerful AI models is currently an ad hoc process that evolves as new techniques are developed. As models become more powerful, a more robust process is needed. Google DeepMind's Frontier Safety Framework aims to address this, joining efforts by other major tech companies, including Meta, OpenAI, and Microsoft, to mitigate concerns about AI.
Key takeaways:
- Google DeepMind has released a framework for monitoring AI models to determine if they're approaching dangerous capabilities.
- The process involves re-evaluating DeepMind's models every time the compute used to train a model increases six-fold, or after every three months of fine-tuning progress (see the sketch after this list).
- DeepMind plans to work with other companies, academia, and lawmakers to improve the framework and aims to start implementing its auditing tools by 2025.
- The Frontier Safety Framework is one of several methods announced by major tech companies, including Meta, OpenAI, and Microsoft, to mitigate concerns about AI.
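To make the re-evaluation trigger concrete, here is a minimal sketch of how such a policy could be checked. This is not DeepMind's actual tooling; the function names, threshold constants, and compute figures are illustrative assumptions based only on the two triggers described above.

```python
from datetime import datetime, timedelta

# Hypothetical thresholds drawn from the article's description:
# re-evaluate when training compute grows six-fold, or after
# roughly three months of fine-tuning progress.
COMPUTE_GROWTH_FACTOR = 6.0
FINE_TUNING_INTERVAL = timedelta(days=90)

def needs_reevaluation(last_eval_compute_flops: float,
                       current_compute_flops: float,
                       last_eval_date: datetime,
                       now: datetime) -> bool:
    """Return True if either of the two triggers has fired."""
    compute_trigger = current_compute_flops >= COMPUTE_GROWTH_FACTOR * last_eval_compute_flops
    time_trigger = (now - last_eval_date) >= FINE_TUNING_INTERVAL
    return compute_trigger or time_trigger

# Example: training compute has grown 8x since the last evaluation,
# so a new evaluation would be due even though only a month has passed.
print(needs_reevaluation(1e24, 8e24, datetime(2024, 5, 1), datetime(2024, 6, 1)))  # True
```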