How to Steal an AI Model Without Actually Hacking Anything

Dec 28, 2024 - gizmodo.com
Researchers at North Carolina State University have developed a method to extract the architecture and hyperparameters of AI models by analyzing their electromagnetic signatures. Using an electromagnetic probe, pre-trained open-source AI models, and a Google Edge Tensor Processing Unit (TPU), they were able to replicate AI models with 99.91% accuracy. The technique works by comparing electromagnetic emissions captured while the TPU runs a target model against data from known models, revealing the characteristics needed to duplicate it. The study highlights the potential for intellectual property theft in AI, as models like ChatGPT, which consist of billions of parameters, could be copied without the original developers' consent.
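
The article does not detail the researchers' signal-processing pipeline, but the general idea of matching captured emissions against signatures from known configurations can be illustrated with a toy template-matching sketch. Everything below (the trace data, layer names, and the correlation-based matcher) is a hypothetical illustration under assumed inputs, not the published method.

```python
# Illustrative sketch only: toy "template matching" of the kind used in
# side-channel analysis. All names and data here are hypothetical; this is
# not the NC State researchers' actual pipeline.
import numpy as np

def normalized_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation between two equal-length signal traces."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))

def identify_layer_config(captured_trace: np.ndarray,
                          reference_traces: dict) -> str:
    """Return the candidate configuration whose reference electromagnetic
    trace best matches the trace captured from the target device."""
    scores = {name: normalized_correlation(captured_trace, trace)
              for name, trace in reference_traces.items()}
    return max(scores, key=scores.get)

# Hypothetical usage: reference traces would be recorded by running known
# layer configurations on an identical chip; the captured trace comes from
# probing the target device while it performs inference.
rng = np.random.default_rng(0)
references = {
    "conv3x3_64ch": rng.standard_normal(1000),
    "conv3x3_128ch": rng.standard_normal(1000),
    "dense_512": rng.standard_normal(1000),
}
captured = references["conv3x3_128ch"] + 0.1 * rng.standard_normal(1000)
print(identify_layer_config(captured, references))  # -> "conv3x3_128ch"
```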

The research underscores the vulnerability of AI models to side-channel attacks, where electromagnetic emissions are used to infer sensitive information. While this method requires physical access to the chip, it raises concerns about the security of AI models deployed on edge devices and servers that are not physically secured. Mehmet Sencan, a security researcher, noted that this approach of extracting entire model architectures is significant, as AI hardware performs inference in plaintext, making it susceptible to such probing. The study also involved collaboration with Google to assess the attackability of its chips, and researchers speculate that capturing models on smartphones could be feasible, albeit more challenging due to their compact design.

Key takeaways:

  • Researchers at North Carolina State University have developed a technique to extract AI model architecture by analyzing electromagnetic signatures from a TPU chip.
  • This method requires physical access to the chip and involves comparing electromagnetic data from different AI models to determine specific characteristics with high accuracy.
  • Theft of AI models is a growing concern, as it allows unauthorized parties to use and potentially sell proprietary models without incurring the original development costs.
  • While side-channel attacks on edge devices are not new, this technique's ability to extract an entire model's architecture and hyperparameters is considered significant, highlighting security vulnerabilities in AI hardware.
