However, some critics argue that red teaming is not sufficient. They propose "violet teaming", which involves identifying the potential harm an AI system might cause to an institution or public good, and then developing tools, using that same system, to protect those entities. Even so, red teaming networks like OpenAI's are currently seen as the best available option.
Key takeaways:
- OpenAI has launched the OpenAI Red Teaming Network, a group of contracted experts to help assess and mitigate risks in the company’s AI models.
- Red teaming is an important step in AI model development, helping to surface issues such as biases in models and gaps in safety filters.
- The Red Teaming Network will work with scientists, research institutions, and civil society organizations, and members will be called upon based on their expertise at various stages of the model and product development lifecycle.
- Despite the benefits of red teaming, some experts argue it is not enough and advocate for "violet teaming", which involves identifying potential harm to institutions or public goods and developing tools to defend those entities.