The Cato Networks report underscores the effectiveness of the immersive world technique and the functionality of the generated malicious code. The researchers disclosed the threat to the AI tool providers involved in the study, including Microsoft, OpenAI, and DeepSeek, as well as to Google, whose Chrome browser was the malware's target. Microsoft and OpenAI acknowledged the disclosure, while Google declined to review the code. The article emphasizes the growing threat posed by AI-generated malware and the need for stronger security measures against such novel hacking techniques.
Key takeaways:
- Infostealer malware is on the rise, with 2.1 billion credentials compromised and 85 million newly stolen passwords already used in attacks.
- Hackers can use a large language model jailbreak technique, known as an immersive world attack, to create infostealer malware.
- A threat intelligence researcher with no prior malware-coding experience jailbroke multiple large language models into producing a working password infostealer targeting Google Chrome.
- The immersive world attack uses narrative engineering to bypass LLM security guardrails: the attacker constructs a detailed fictional world in which restricted operations are normalized, so the model treats the malicious request as an in-story task.