The FBI and Europol have warned about the potential use of generative AI by cybercriminals for fraud and impersonation. Scammers have already tricked people into downloading malware through fake social media ads for generative AI systems, and cybercriminals have shared jailbreaks for bypassing the safety restrictions on popular LLMs. Even so, these unconstrained versions may not be very useful to cybercriminals in their current form.
Key takeaways:
- The creators of the chatbot systems FraudGPT and WormGPT are allegedly selling access to their tools, claiming the chatbots can generate scam emails.
- While the existence and legitimacy of these systems are hard to verify, there are indications that people are using WormGPT, according to Sergey Shykevich of security firm Check Point.
- Law enforcement agencies including the FBI and Europol have warned that cybercriminals could use generative AI such as LLMs for fraud, impersonation, and other social engineering tactics.
- Despite the potential risks, unconstrained versions of these models may not be very useful to cybercriminals in their current form: they have not been shown to be any more effective than an average developer.