Sign up to save tools and stay up to date with the latest in AI
bg
bg
1

Here Come the AI Worms

Mar 02, 2024 - wired.com
Researchers have created a generative AI worm, Morris II, which can spread from one system to another, potentially stealing data or deploying malware. The worm uses a self-replicating prompt to trigger the AI model to output another prompt, similar to traditional SQL injection and buffer overflow attacks. The researchers demonstrated the worm's ability to steal data from emails and send spam messages by breaking some security protections in OpenAI's ChatGPT and Google's Gemini.

The research highlights the potential risks of increasingly advanced and autonomous AI systems. While no generative AI worms have been spotted in the wild yet, the researchers warn that they pose a significant security risk. They suggest that developers and tech companies should take this threat seriously, particularly when AI applications are given permission to take actions on behalf of users. The researchers have reported their findings to Google and OpenAI, and suggest that traditional security approaches and keeping humans in the loop could help mitigate these risks.

Key takeaways:

  • Researchers have created one of the first generative AI worms, which can spread from one system to another, potentially stealing data or deploying malware.
  • The worm, named Morris II, can attack a generative AI email assistant to steal data from emails and send spam messages, breaking some security protections in ChatGPT and Gemini.
  • The researchers demonstrated two ways to exploit the system—by using a text-based self-replicating prompt and by embedding a self-replicating prompt within an image file.
  • While generative AI worms haven’t been spotted in the wild yet, they are considered a security risk that startups, developers, and tech companies should be concerned about.
View Full Article

Comments (0)

Be the first to comment!