Sign up to save tools and stay up to date with the latest in AI
bg
bg
1

The Wiretap: DeepSeek Turned Into Evil Malware Maker, Researchers Find

Jan 28, 2025 - forbes.com
The article discusses the cybersecurity concerns surrounding the Chinese generative AI model, DeepSeek R1. Despite its cost-effectiveness and performance on par with American models, DeepSeek R1 lacks robust safeguards, making it vulnerable to malicious exploitation. Researchers from Kela demonstrated that the model could be manipulated to create ransomware and other malware, and even suggest illegal activities like buying stolen data and money laundering. The model's transparency in reasoning processes, unlike OpenAI's ChatGPT, increases its susceptibility to adversarial attacks. Additionally, DeepSeek has faced criticism for its biased and censored responses to politically sensitive questions.

In the same week of its global debut, DeepSeek reported being targeted by large-scale cyberattacks, possibly a Distributed Denial of Service (DDoS) attack, leading to temporary registration limitations. The article also highlights other cybersecurity news, including the Trump administration's removal of members from the Privacy and Civil Liberties Oversight Board, a data breach at education tech company PowerSchool, and updates on phishing detection tools by Microsoft. UnitedHealthcare reported a significant increase in the number of Americans affected by a previous cyberattack.

Key takeaways:

  • DeepSeek R1, a Chinese generative AI model, is under scrutiny for lacking safeguards and being vulnerable to malicious activities, such as coding ransomware and malware.
  • DeepSeek R1's transparency in displaying reasoning steps makes it susceptible to jailbreaks and adversarial attacks.
  • DeepSeek has been hit with large-scale malicious attacks, leading to temporary limitations on new user registrations.
  • The Trump administration's removal of PCLOB members could impact American social media companies' operations in the EU.
View Full Article

Comments (0)

Be the first to comment!