The research underscores the vulnerability of AI models to side-channel attacks, in which electromagnetic emissions are used to infer sensitive information. While the method requires physical access to the chip, it raises concerns about the security of AI models deployed on edge devices and on servers that are not physically secured. Mehmet Sencan, a security researcher, called the approach of extracting entire model architectures significant, since AI hardware performs inference in plaintext and is therefore susceptible to such probing. The study also involved collaboration with Google to assess how vulnerable its chips are to this kind of attack, and the researchers speculate that capturing models on smartphones could be feasible, albeit more challenging because of their compact design.
Key takeaways:
- Researchers at North Carolina State University have developed a technique to extract an AI model's architecture by analyzing electromagnetic signatures emitted by a TPU chip.
- This method requires physical access to the chip and works by comparing the captured electromagnetic signature against emissions recorded from known AI models, recovering architectural characteristics with high accuracy (a minimal sketch of this kind of signature comparison follows this list).
- Theft of AI models is a growing concern, as it allows unauthorized parties to use and potentially sell proprietary models without incurring the original development costs.
- While side-channel attacks on edge devices are not new, this technique's ability to extract the full set of model architecture hyperparameters is considered significant, highlighting security vulnerabilities in AI hardware.
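The takeaways above describe signature comparison only at a high level. As a purely illustrative aid, and not the researchers' actual pipeline, the Python sketch below shows one simple way such a comparison could work: normalize a captured electromagnetic trace, correlate it against reference traces associated with known architectures, and report the closest match. The architecture names, the synthetic traces, and the single-correlation matching are all assumptions made for the sake of illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in reference library: in a real setting these would be EM traces
# recorded while known model architectures ran on the same accelerator.
# Synthetic arrays are used here purely so the sketch runs end to end.
reference_traces = {
    "model_a": rng.normal(size=10_000),
    "model_b": rng.normal(size=10_000),
    "model_c": rng.normal(size=10_000),
}

def normalize(trace: np.ndarray) -> np.ndarray:
    """Zero-mean, unit-variance scaling so traces with different gain are comparable."""
    return (trace - trace.mean()) / trace.std()

def best_match(captured: np.ndarray, references: dict) -> tuple:
    """Correlate the captured trace against each reference and return the closest match."""
    captured = normalize(captured)
    scores = {}
    for name, ref in references.items():
        ref = normalize(ref)
        n = min(len(captured), len(ref))  # crude length alignment
        scores[name] = float(np.corrcoef(captured[:n], ref[:n])[0, 1])
    name = max(scores, key=scores.get)
    return name, scores[name]

# Pretend the captured emission is a noisy copy of model_b's reference trace.
captured_trace = reference_traces["model_b"] + 0.3 * rng.normal(size=10_000)
arch, score = best_match(captured_trace, reference_traces)
print(f"Closest reference: {arch} (correlation {score:.3f})")
```

Published side-channel work typically relies on far more sophisticated signal processing and statistical modeling than a single correlation score; this sketch only conveys the intuition of matching an observed emission against known signatures.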