
U.K. agency releases tools to test AI model safety | TechCrunch

May 11, 2024 - techcrunch.com
The U.K. AI Safety Institute has launched an open-source toolset called Inspect to enhance AI safety. The toolset is designed to evaluate AI models' capabilities, including their core knowledge and reasoning ability, and to generate a score based on the results. This marks the first time a state-backed body has released an AI safety testing platform for wider use. Inspect consists of three components: data sets, solvers, and scorers, and can be extended with third-party Python packages.

The release of Inspect follows the launch of NIST GenAI by the U.S.'s National Institute of Standards and Technology, a program to assess various generative AI technologies. In April, the U.S. and U.K. announced a partnership to jointly develop advanced AI model testing, with the U.S. planning to launch its own AI safety institute to evaluate risks from AI and generative AI.

Key takeaways:

  • The U.K. AI Safety Institute has released an open-source toolset called Inspect, designed to strengthen AI safety by making it easier for various organizations to develop AI evaluations.
  • Inspect aims to assess certain capabilities of AI models, including models’ core knowledge and ability to reason, and generate a score based on the results.
  • Inspect is made up of three basic components: data sets, solvers and scorers, and its built-in components can be augmented via third-party packages written in Python.
  • The release of Inspect follows the launch of NIST GenAI by the National Institute of Standards and Technology (NIST) in the U.S., and precedes the planned launch of a U.S. AI safety institute as part of a U.S.-U.K. partnership to jointly develop advanced AI model testing.
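To make the three-component architecture concrete, here is a minimal sketch of how a data set, solver, and scorer might fit together in an evaluation pipeline. Note that all names and signatures below are invented for illustration; they are not the real Inspect API, which should be consulted directly for actual usage.

```python
from dataclasses import dataclass

# Hypothetical sketch of an eval pipeline with Inspect's three
# component types: data sets, solvers, and scorers. Every name
# here is illustrative, not taken from the real Inspect toolset.

@dataclass
class Sample:
    prompt: str   # input shown to the model under test
    target: str   # expected answer used for grading

def dataset() -> list[Sample]:
    # A "data set" supplies the evaluation samples.
    return [
        Sample(prompt="2 + 2 =", target="4"),
        Sample(prompt="Capital of France?", target="Paris"),
    ]

def solver(sample: Sample) -> str:
    # A "solver" performs the task, normally by querying the
    # model being evaluated. A canned stub stands in here.
    canned = {"2 + 2 =": "4", "Capital of France?": "Paris"}
    return canned.get(sample.prompt, "")

def scorer(output: str, sample: Sample) -> bool:
    # A "scorer" grades the solver's output against the target.
    return output.strip() == sample.target

def run_eval() -> float:
    # Aggregate the per-sample grades into a single score,
    # mirroring how Inspect reports a score for a model.
    samples = dataset()
    correct = sum(scorer(solver(s), s) for s in samples)
    return correct / len(samples)
```

The separation matters for extensibility: because the solver and scorer are independent of the data set, a third-party Python package can contribute a new solver (say, a chain-of-thought prompting strategy) or a new scorer without touching the evaluation data.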
