OpenAI's GPT-4 can exploit real vulnerabilities by reading security advisories

Apr 20, 2024 - theregister.com
Researchers from the University of Illinois Urbana-Champaign claim that AI agents built on OpenAI's GPT-4 can exploit real-world security vulnerabilities by reading the associated security advisories. Tested on a dataset of 15 real-world "one-day" vulnerabilities (flaws that have been publicly disclosed but not yet patched on the target system), GPT-4 exploited 87% of them, versus 0% for every other model and open-source vulnerability scanner tested. The researchers believe future AI models could be more capable still, potentially making exploitation easier for everyone.

However, without access to the relevant CVE (Common Vulnerabilities and Exposures) description, GPT-4's success rate dropped to just 7%. The researchers argue against limiting the public availability of security information as a defense against AI agents, advocating instead for proactive measures such as applying updates promptly when security patches are released. The study also found that a successful AI agent attack costs significantly less than hiring a human penetration tester.
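To make the recommended defense concrete, here is a minimal sketch of what tracking advisories for prompt patching might look like, assuming NIST's public NVD CVE API 2.0 as the advisory source. The keyword "nginx" and the recent_cves helper are illustrative assumptions, not anything from the article or the paper.

```python
# Minimal sketch: poll the public NVD CVE API 2.0 for recent advisories
# matching a product keyword, so patches can be prioritized as soon as
# fixes ship. The keyword "nginx" below is illustrative, not from the article.
import json
import urllib.request
from urllib.parse import urlencode

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def recent_cves(keyword: str, limit: int = 5) -> list[dict]:
    """Return the ID and English description of recent CVEs matching keyword."""
    query = urlencode({"keywordSearch": keyword, "resultsPerPage": limit})
    with urllib.request.urlopen(f"{NVD_URL}?{query}") as resp:
        data = json.load(resp)
    results = []
    for item in data.get("vulnerabilities", []):
        cve = item["cve"]
        # Each CVE entry carries descriptions in several languages; take English.
        desc = next(
            (d["value"] for d in cve.get("descriptions", []) if d["lang"] == "en"),
            "",
        )
        results.append({"id": cve["id"], "description": desc})
    return results

if __name__ == "__main__":
    for adv in recent_cves("nginx"):
        print(adv["id"], "-", adv["description"][:100])
```

In practice, output like this would feed a patch-prioritization workflow rather than stdout, so fixes are applied as soon as vendors release them, which is the proactive posture the researchers recommend.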

Key takeaways:

  • AI agents, particularly OpenAI's GPT-4, can exploit real-world security vulnerabilities by reading security advisories, according to a paper by four University of Illinois Urbana-Champaign computer scientists.
  • A GPT-4 agent was able to exploit 87% of a dataset of 15 one-day vulnerabilities, significantly outperforming other models and open-source vulnerability scanners.
  • The researchers believe that future AI models will be even more capable, potentially making exploitation much easier for everyone.
  • The researchers argue that limiting the public availability of security information is not a viable defense against these AI agents, and instead recommend proactive security measures such as applying updates promptly when security patches are released.