NIST releases a tool for testing AI model risk | TechCrunch

Jul 29, 2024 - news.bensbites.com
The National Institute of Standards and Technology (NIST) has re-released Dioptra, a testbed designed to measure how malicious attacks might degrade the performance of an AI system. The open-source tool, first released in 2022, helps companies that train AI models assess, analyze, and track AI risks. Dioptra can be used to benchmark and research models, as well as provide a platform for exposing models to simulated threats in a "red-teaming" environment.

Dioptra is a product of President Joe Biden’s executive order on AI, which mandates that NIST help with AI system testing. However, Dioptra only works on models that can be downloaded and used locally. Models gated behind an API, such as OpenAI’s GPT-4o, are currently not supported. NIST does not claim that Dioptra can completely de-risk models, but suggests it can shed light on potential attacks and quantify their impact on performance.
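The "quantify their impact on performance" part is easy to picture with a toy experiment: corrupt a slice of the training labels and compare test accuracy before and after. The sketch below is not Dioptra's interface; it is a minimal, generic Python illustration (scikit-learn's digits dataset, a logistic-regression model, and a 30% label-flip rate are all hypothetical choices made for the sketch) of the poisoning-style degradation measurement a testbed like Dioptra is built to run systematically.

```python
# Generic illustration of the attack class Dioptra measures: "poisoning"
# a training set and quantifying the resulting accuracy drop.
# NOT Dioptra's API -- dataset, model, and poison rate are hypothetical.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

def train_and_score(train_labels):
    """Fit a classifier on (possibly poisoned) labels, score on clean test data."""
    model = LogisticRegression(max_iter=2000)
    model.fit(X_train, train_labels)
    return accuracy_score(y_test, model.predict(X_test))

baseline = train_and_score(y_train)

# Poison 30% of the training labels by flipping them to random classes.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = rng.integers(0, 10, size=len(idx))

print(f"clean accuracy:    {baseline:.3f}")
print(f"poisoned accuracy: {train_and_score(poisoned):.3f}")
```

In a real red-teaming run the corruption would be adversarially crafted rather than random; systematizing that kind of attack variation and its measured cost is what the testbed is for.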

Key takeaways:

  • The National Institute of Standards and Technology (NIST) has re-released Dioptra, a testbed designed to measure how malicious attacks might degrade the performance of an AI system, particularly attacks that “poison” AI model training data.
  • Dioptra is an open-source, web-based tool that can be used to benchmark and research models, as well as provide a platform for exposing models to simulated threats in a “red-teaming” environment.
  • The tool is a product of President Joe Biden’s executive order on AI, which mandates that NIST help with AI system testing and establishes standards for AI safety and security.
  • Dioptra can shed light on which sorts of attacks might make an AI system perform less effectively and quantify that impact on performance. However, it only works out of the box on models that can be downloaded and used locally (the distinction is sketched below).
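The "downloadable and usable locally" constraint is worth making concrete. The sketch below is not Dioptra code, and the model name is simply a stand-in for any open-weights checkpoint: weights that run on your own machine can be instrumented and attacked in a testbed, whereas an API-gated model such as GPT-4o only exposes a remote endpoint.

```python
# Illustration of the local-model constraint (not Dioptra code; the
# model name is just an example of a freely downloadable checkpoint).
from transformers import pipeline

# Weights are downloaded and executed locally -- the class of model
# a testbed like Dioptra can exercise out of the box.
clf = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(clf("Dioptra stress-tests models you can run yourself."))

# An API-gated model only exposes a hosted endpoint, so a local testbed
# cannot instrument its weights or its training pipeline.
```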