The attack is passive and can occur without the knowledge of OpenAI or its clients. While OpenAI encrypts its traffic to prevent eavesdropping, the encryption itself is not broken; rather, the real-time, token-by-token transmission of responses exposes the length of each token, leaking the content of the messages. All major chatbots except Google Gemini are affected. The researchers have termed this vulnerability the "token-length sequence" side channel.
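To make the leak concrete, here is a minimal Python sketch of what a passive observer might record, assuming one encrypted record per streamed token and a hypothetical fixed per-record overhead of 29 bytes (the real overhead varies by cipher suite and transport):

```python
FIXED_OVERHEAD = 29  # hypothetical per-record encryption overhead, in bytes

def observed_packet_sizes(tokens, overhead=FIXED_OVERHEAD):
    """What a passive eavesdropper sees: one encrypted record per streamed token."""
    return [len(token.encode("utf-8")) + overhead for token in tokens]

def recover_token_lengths(packet_sizes, overhead=FIXED_OVERHEAD):
    """Subtract the constant overhead to recover the token-length sequence."""
    return [size - overhead for size in packet_sizes]

tokens = ["I", " have", " a", " rash", " on", " my", " arm"]
sizes = observed_packet_sizes(tokens)
print(sizes)                         # [30, 34, 31, 34, 32, 32, 33]
print(recover_token_lengths(sizes))  # [1, 5, 2, 5, 3, 3, 4]
```

Because stream encryption preserves payload sizes rather than padding them, the ciphertext sizes track the plaintext token lengths exactly; the encryption hides *what* each token says, but not *how long* it is.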
Key takeaways:
- All major non-Google AI chatbots are affected by a side channel that leaks responses sent to users, potentially allowing attackers to read private AI assistant chats even though they're encrypted.
- The attack can decipher AI assistant responses with surprising accuracy by exploiting a side channel present in all major AI assistants except Google Gemini.
- An attacker with a passive adversary-in-the-middle position can infer the specific topic of 55 percent of all captured responses, usually with high word accuracy, and can deduce responses with perfect word accuracy 29 percent of the time.
- The side channel used in this latest attack resides in the tokens AI assistants emit while responding to a user query. Although each token is delivered encrypted, the real-time, token-by-token transmission exposes the length of every one, a previously unknown side channel the researchers call the "token-length sequence" (see the sketch after this list).
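As an illustration of how a token-length sequence narrows down the plaintext, the sketch below filters a hypothetical candidate list by exact length-sequence match. Both the whitespace tokenizer and the fixed candidate list are simplifications: the real attack must infer responses with far more sophisticated models, and real tokenizers do not split purely on spaces.

```python
def token_lengths(text):
    # Crude stand-in for the assistant's real tokenizer: whitespace split,
    # keeping the leading space on each non-initial token, as streaming
    # tokenizers commonly do.
    words = text.split()
    tokens = [words[0]] + [" " + w for w in words[1:]]
    return [len(t) for t in tokens]

# Token-length sequence recovered from packet sizes (see previous sketch).
observed = [1, 5, 2, 5, 3, 3, 4]

# Hypothetical candidate responses for illustration only.
candidates = [
    "I have a rash on my arm",
    "I need a refund on my card",
    "It hurts to bend my knee",
]

matches = [c for c in candidates if token_lengths(c) == observed]
print(matches)  # ['I have a rash on my arm']
```

Even this toy filter shows why the channel is dangerous: a sequence of lengths that looks like noise still rules out most possible sentences, and statistical models can push that filtering much further.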