Mitigating a token-length side-channel attack in our AI products

Mar 14, 2024 - blog.cloudflare.com
Cloudflare has implemented a mitigation to counter a token-length side-channel attack in its AI products, following a paper by researchers at Ben-Gurion University. The paper detailed a novel side channel that could be used to read encrypted responses from AI assistants over the web. The attack involves intercepting the stream of a chat session with an LLM provider and using the packet sizes, visible in the network packet headers, to infer the length of each token. Cloudflare's mitigation obscures token size by padding token responses with random-length noise, so that responses can no longer be inferred from the packets.
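
The core of the attack is simple arithmetic on observed packet sizes. The TypeScript sketch below is purely illustrative: it assumes each streamed token travels in its own packet with a roughly constant envelope overhead (the 120-byte figure is an assumption for the example, not a measured value), and shows how an eavesdropper could map packet sizes to token lengths when no padding is applied.

```typescript
// Hypothetical sketch of the side channel described above, assuming each
// streamed token is delivered in its own packet with a roughly constant
// envelope overhead (headers plus JSON framing).
function inferTokenLengths(packetSizes: number[], envelopeOverhead: number): number[] {
  // Whatever exceeds the fixed envelope is attributed to the token itself.
  return packetSizes.map((size) => Math.max(0, size - envelopeOverhead));
}

// Packets of 121, 124 and 128 bytes with an assumed 120-byte envelope
// suggest tokens of roughly 1, 4 and 8 characters.
console.log(inferTokenLengths([121, 124, 128], 120)); // [1, 4, 8]
```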

The mitigation has been added to Workers AI, Cloudflare's inference product, and to AI Gateway, which proxies requests to any supported provider, so all users of these services are now automatically protected from this side-channel attack. Cloudflare has not seen any malicious exploitation of this vulnerability; the only known use is the researchers' ethical testing. The company also noted that no modifications to the SDK or client code are required, the changes are invisible to end users, and no action is needed from customers.

Key takeaways:

  • Cloudflare has implemented a mitigation strategy to secure all Workers AI and AI Gateway customers from a token-length side-channel attack.
  • The attack method involves intercepting the stream of a chat session with an LLM provider and using network packet headers to infer the length of each token.
  • Cloudflare's solution adds a new property, 'p' (for padding), with a string value of variable random length to the streamed JSON objects, thereby obscuring token size and preventing token lengths from being inferred from network packet sizes (see the sketch after this list).
  • Cloudflare's AI Gateway, which acts as a proxy between a user and supported inference providers, has also been updated to automatically protect against this side-channel attack, even if the upstream inference providers have not yet mitigated the vulnerability.
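
The padding approach in the third takeaway can be illustrated with a short sketch. The property name 'p' comes from the article; the padding bounds, character set, and helper function below are assumptions made for illustration, not Cloudflare's actual implementation.

```typescript
// A minimal sketch, assuming each streamed chunk is a JSON object such as
// { response: "<token>" }. A "p" property holding a random-length string is
// appended so the serialized size no longer tracks the token length. The
// 1-63 character range and Math.random() are illustrative choices only;
// production code would use a cryptographically secure source of randomness.
function padStreamedChunk(chunk: { response: string }): { response: string; p: string } {
  const alphabet =
    "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
  const padLength = 1 + Math.floor(Math.random() * 63);
  let padding = "";
  for (let i = 0; i < padLength; i++) {
    padding += alphabet[Math.floor(Math.random() * alphabet.length)];
  }
  return { ...chunk, p: padding };
}

// Two chunks carrying different-length tokens now serialize to byte counts
// dominated by the random padding rather than the token.
console.log(JSON.stringify(padStreamedChunk({ response: "Hi" })));
console.log(JSON.stringify(padStreamedChunk({ response: "Hello there!" })));
```

Because the client simply ignores the unknown 'p' field when parsing each chunk, the padding can be added server-side without any SDK or client changes, which is why no customer action is required.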