The AI Safety Institute, which currently employs 32 people, recently released Inspect, its first set of tools for testing the safety of foundation AI models. The UK Secretary of State for Science, Innovation and Technology, Michelle Donelan, described this as a "phase one" effort and highlighted the difficulty of benchmarking models and the inconsistency with which companies opt in to model vetting. She also hinted at the possibility of further AI legislation in the UK once the scope of AI risks is better understood.
Key takeaways:
- The U.K.'s AI Safety Institute is planning to open a second location in San Francisco, putting it closer to the epicenter of AI development.
- The U.K. sees AI and technology as a huge opportunity for economic growth and investment, and aims to work more collaboratively with the U.S. on AI safety initiatives.
- The AI Safety Institute recently released Inspect, its first set of tools for testing the safety of foundation AI models, and plans to present it to regulators at the upcoming AI safety summit in Seoul.
- Longer term, the U.K. plans to build out more AI legislation, but will hold off until it better understands the scope of AI risks.