The contest highlights the growing interest in applying red-teaming exercises to AI systems, which are often opaque in their workings and wide-ranging in their potential applications. While lawmakers debate how to regulate the technology, tech giants are racing to show that they can regulate themselves through voluntary initiatives and partnerships. Red-teaming is likely to be a key component of these efforts, with companies such as Google and OpenAI volunteering their latest chatbots and image generators to be put to the test.
Key takeaways:
- A public “red teaming” event was held at Howard University to surface novel ways AI chatbots can go awry, with the aim of fixing those flaws before they cause harm.
- The event is a precursor to a larger public event at Def Con, the annual hacker convention in Las Vegas, where top hackers will try to induce AI models to fail in various ways, including by producing political misinformation, defamatory claims, and systemically biased output.
- Leading AI firms such as Google, OpenAI, Anthropic, and Stability AI have volunteered their latest chatbots and image generators to be tested in the competition, with results sealed for several months to give the companies time to address the exposed flaws.
- The event highlights the growing interest in applying red-teaming exercises to AI systems, which are widely expected to be exploited in surprising ways because their workings are opaque and their potential applications wide-ranging.