Criminals Have Created Their Own ChatGPT Clones

Aug 08, 2023 - wired.com
The article discusses the potential use of large language models (LLMs) by cybercriminals for fraudulent activities. The creators of two such systems, FraudGPT and WormGPT, have allegedly been trying to sell access to these chatbots, with the latter receiving payments into a cryptocurrency wallet. However, the legitimacy of these systems is hard to verify, and there is skepticism about their effectiveness compared to commercial LLMs. Despite this, there is concern that cybercriminals could use LLMs to enhance their scamming capabilities.

The FBI and Europol have warned about the potential use of generative AI by cybercriminals for fraud and impersonation. Scammers have already tricked people into downloading malware through fake ads for generative AI systems on social media. There have also been instances of cybercriminals sharing jailbreaks to bypass safety restrictions on popular LLMs. However, even these unconstrained versions may not be very useful for cybercriminals in their current form.

Key takeaways:

  • The creators of chatbot systems FraudGPT and WormGPT are allegedly selling access to their systems, with claims of the chatbots being able to generate scam emails.
  • While the existence and legitimacy of these systems are hard to verify, there are indications that people are using WormGPT, according to Sergey Shykevich from security firm Check Point.
  • Law enforcement agencies like the FBI and Europol have warned that cybercriminals could potentially use generative AI like LLMs for fraud, impersonation, and other social engineering tactics.
  • Despite the potential risks, unconstrained versions of these models may not be very useful to cybercriminals in their current form, as they have not been shown to produce anything more effective than what an average developer could write.
