OpenAI launches a red teaming network to make its models more robust | TechCrunch

Sep 19, 2023 - techcrunch.com
OpenAI has launched the OpenAI Red Teaming Network, a group of contracted experts who will help the company assess and mitigate risks in its AI models. Red teaming is intended to surface biases in models and catch failures in safety filters, informing the development process. The network will draw on individual scientists, research institutions, and civil society organizations, with members selected for their expertise. The initiative is open to experts from a range of domains, including linguistics, biometrics, finance, and healthcare.

Some critics argue, however, that red teaming is not sufficient. They propose "violet teaming": identifying the potential harm an AI system might cause to an institution or public good, and then building tools on top of that same system to defend those entities. Even so, red teaming networks like OpenAI's are currently seen as the best available option.

Key takeaways:

  • OpenAI has launched the OpenAI Red Teaming Network, a group of contracted experts to help assess and mitigate risks in the company’s AI models.
  • Red teaming is an important step in AI model development, helping to identify biases in models and issues with safety filters.
  • The Red Teaming Network will work with scientists, research institutions, and civil society organizations, and members will be called upon based on their expertise at various stages of the model and product development lifecycle.
  • Despite the benefits of red teaming, some experts argue it is not enough and advocate for "violet teaming", which involves identifying potential harm to institutions or public goods and developing tools to defend against it.