The team reported their findings to Google and OpenAI. While Google declined to comment, an OpenAI spokesperson acknowledged the potential for exploitation of prompt-injection vulnerabilities and stated that the company is working on making its systems more resilient. They also advised developers to ensure they are not working with harmful input.
Key takeaways:
- A team of researchers developed a first-generation AI worm named 'Morris II' that can steal data, spread malware, and spam others via an email client.
- The worm targets AI apps and AI-enabled email assistants that generate text and images using models like Gemini Pro, ChatGPT 4.0, and LLaVA.
- The researchers demonstrated that attackers can use the worm to mine confidential information, including credit card details and social security numbers.
- The team reported their findings to Google and OpenAI, with OpenAI acknowledging the vulnerability and stating that they are working on making their systems more resilient.