One model was of particular concern as it opened a reverse shell that gave a remote device on the internet full control of the end user's machine. This model was submitted by a user named baller432 and was able to evade Hugging Face's malware scanner by using pickle's "__reduce__" method to execute arbitrary code when the model file is loaded. This silent infiltration could grant access to critical internal systems and pave the way for large-scale data breaches or even corporate espionage. Hugging Face has since removed the model and the others flagged by JFrog.
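The "__reduce__" mechanism described above can be sketched harmlessly in a few lines of Python. The class name and payload below are purely illustrative; a real malicious model file would return something like os.system with a shell command instead of a benign builtin:

```python
import pickle

class Payload:
    """Harmless stand-in for a booby-trapped model file.

    __reduce__ tells pickle to call an arbitrary callable with given
    arguments during deserialization, instead of rebuilding the object.
    """
    def __reduce__(self):
        # A real attack would return e.g. (os.system, ("<reverse shell>",)).
        # Here the "payload" just calls a harmless builtin.
        return (len, ("executed during pickle.loads",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # runs len(...); no Payload object is created
print(result)                # the string's length, proving the call ran
```

Note that the code runs as a side effect of pickle.loads itself, which is why scanning the loaded object after the fact is too late.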
Key takeaways:
- Malicious code was covertly installed on end-user machines through the AI developer platform Hugging Face, according to a report by security firm JFrog.
- Out of roughly 100 submissions that performed hidden and unwanted actions, 10 were found to be 'truly malicious', compromising users' security when loaded.
- One model opened a reverse shell, giving a remote device full control of the end user's machine, a major breach of researcher ethics.
- The malicious models used pickle, a serialization format long recognized as inherently risky, to smuggle in their payloads. The model that spawned the reverse shell was submitted by a user named baller432 and evaded Hugging Face's malware scanner.
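Because pickle executes code by design, defending against such payloads means refusing to resolve attacker-chosen globals at all. One documented mitigation pattern (shown in the Python pickle docs) is an allowlist-based Unpickler; the class name and allowlist here are a minimal sketch, not a complete defense, and safer formats such as safetensors avoid the problem entirely:

```python
import io
import pickle

class SafeUnpickler(pickle.Unpickler):
    """Allowlist-based unpickler: any global lookup outside ALLOWED
    raises UnpicklingError instead of importing attacker-chosen code."""
    ALLOWED = {("builtins", "list"), ("builtins", "dict")}

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

# Plain data deserializes normally...
ok = SafeUnpickler(io.BytesIO(pickle.dumps([1, 2, 3]))).load()
print(ok)

# ...but a __reduce__-style payload referencing a global is rejected.
class Payload:
    def __reduce__(self):
        return (print, ("never runs",))

try:
    SafeUnpickler(io.BytesIO(pickle.dumps(Payload()))).load()
except pickle.UnpicklingError as exc:
    print("blocked:", exc)
```

An allowlist is deliberately strict: anything not explicitly permitted fails closed, which is why it blocks the payload without needing to recognize it as malicious.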