Cisco: Fine-tuned LLMs are now threat multipliers—22x more likely to go rogue

Apr 04, 2025 - venturebeat.com
The article discusses the rise of weaponized large language models (LLMs) fine-tuned for offensive cyber operations, a trend that is reshaping cybersecurity strategies. These models, such as FraudGPT, GhostGPT, and DarkGPT, can be leased for as little as $75 a month and are used for phishing, exploit generation, and other malicious activities. Their sophistication is blurring the line between legitimate developer platforms and cybercrime kits, driving an increase in AI-powered threats. Fine-tuning an LLM improves task performance but also destabilizes its safety controls, leaving it more susceptible to jailbreaks and more prone to generating malicious output.

Cisco's research highlights the vulnerabilities introduced by fine-tuning LLMs, especially in sensitive domains such as healthcare and law; the study finds that fine-tuned models are significantly more likely to produce harmful outputs than their base models. The article also covers the threat of dataset poisoning, in which attackers can inject malicious data into open-source training sets for as little as $60, and of decomposition attacks, which extract copyrighted content without triggering guardrails. The findings underscore the need for stronger security measures and real-time visibility to counter these evolving threats as LLMs become a critical attack surface in enterprise environments.

Key takeaways:

  • Weaponized large language models (LLMs) like FraudGPT, GhostGPT, and DarkGPT are being used for cyberattacks, with leasing prices as low as $75 a month, and are packaged similarly to legitimate SaaS applications.
  • Fine-tuning weakens an LLM's built-in safety controls, leaving fine-tuned models roughly 22 times more likely to produce harmful outputs than their base models.
  • Data poisoning attacks can be executed for as little as $60, allowing adversaries to inject malicious data into open-source training sets, potentially influencing downstream LLMs and compromising AI supply chains.
  • Decomposition attacks can extract copyrighted and regulated content from LLMs without triggering guardrails, posing significant compliance risks for enterprises in regulated sectors like healthcare, finance, and legal.