However, without access to the relevant CVE (Common Vulnerabilities and Exposures) description, GPT-4's success rate dropped to just 7 percent. The researchers argue against limiting the public availability of security information as a defense against AI agents, advocating instead for proactive measures such as promptly applying security patches when they are released. The study also found that the cost of a successful AI agent attack was significantly lower than the cost of hiring a human penetration tester.
Key takeaways:
- AI agents, particularly OpenAI's GPT-4, can exploit real-world security vulnerabilities by reading security advisories, according to a paper by four University of Illinois Urbana-Champaign computer scientists.
- The GPT-4 agent exploited 87 percent of a dataset of 15 one-day vulnerabilities, significantly outperforming other models and open-source vulnerability scanners.
- The researchers believe that future AI models will be even more capable, potentially making this kind of exploitation far more accessible to would-be attackers.
- Limiting the public availability of security information is not a viable defense against these AI agents; the researchers instead recommend proactive measures such as promptly applying security patches when they are released.