Email security provider SlashNext tested WormGPT and found that it could generate persuasive, strategically cunning messages suitable for business email compromise (BEC) attacks. Generative AI gives such attacks impeccable grammar and lets less skilled attackers execute sophisticated campaigns. To defend against this emerging threat, companies are advised to train employees to verify messages, especially urgent financial requests, and to strengthen their email verification processes.
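One concrete layer of such email verification is checking the `Authentication-Results` header that receiving mail servers attach, which records whether SPF, DKIM, and DMARC checks passed for the sending domain. The sketch below, using only Python's standard library, flags any mechanism that did not explicitly pass; the message, domains, and header contents are illustrative assumptions, not details from the article:

```python
import email
from email import policy

# Hypothetical raw message. Real receiving servers add an
# Authentication-Results header summarizing SPF/DKIM/DMARC outcomes.
RAW_MESSAGE = """\
Authentication-Results: mx.example.com;
 spf=pass smtp.mailfrom=partner.example;
 dkim=pass header.d=partner.example;
 dmarc=fail header.from=partner.example
From: ceo@partner.example
To: finance@example.com
Subject: Urgent wire transfer

Please wire $40,000 today.
"""

def auth_failures(raw: str) -> list[str]:
    """Return the authentication mechanisms (spf/dkim/dmarc) that did not pass."""
    msg = email.message_from_string(raw, policy=policy.default)
    header = str(msg.get("Authentication-Results", ""))
    failures = []
    for mech in ("spf", "dkim", "dmarc"):
        # Flag the mechanism unless the header explicitly records "<mech>=pass".
        if f"{mech}=pass" not in header:
            failures.append(mech)
    return failures

print(auth_failures(RAW_MESSAGE))  # → ['dmarc']
```

A gateway or client plug-in could route any message with a non-empty failure list, especially one containing a financial request, into a quarantine or manual-review queue rather than the recipient's inbox.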
Key takeaways:
- Threat actors are showing growing interest in generative AI tools: researchers have identified over 200,000 OpenAI credentials for sale on the dark web, indicating that cybercriminals see potential for malicious activity in these tools.
- A ChatGPT alternative named WormGPT has been developed, trained on malware-focused data, and advertised as a tool for carrying out illegal activities.
- Tests carried out by email security provider SlashNext reveal that WormGPT has the potential to create highly persuasive and strategically cunning messages for business email compromise (BEC) attacks, highlighting the need for improved email verification processes and employee training.