
Hackers can read private AI assistant chats even though they’re encrypted

Mar 14, 2024 - arstechnica.com
Researchers have discovered a side channel in AI chatbots that can leak encrypted responses sent to users, potentially exposing private conversations. The attack can decipher AI assistant responses with high accuracy, exploiting a side channel present in all major AI assistants except Google Gemini. An attacker who can monitor data packets passing between an AI assistant and the user can infer the specific topic of 55% of all captured responses, with perfect word accuracy 29% of the time.

The attack is passive and can occur without the knowledge of OpenAI or its clients. While OpenAI encrypts its traffic to prevent eavesdropping, the way the encrypted responses are delivered still exposes information about their content. All major chatbots, except Google Gemini, are affected by this vulnerability. The researchers call it the "token-length sequence" side channel, which arises from the real-time, token-by-token transmission of responses by AI assistants.

Key takeaways:

  • All major AI chatbots except Google Gemini are affected by a side channel that leaks responses sent to users, potentially allowing hackers to read private AI assistant chats even though they’re encrypted.
  • The attack can decipher AI assistant responses with surprising accuracy, exploiting a side channel present in all of the major AI assistants, except Google Gemini.
  • Someone with a passive adversary-in-the-middle position can infer the specific topic of 55 percent of all captured responses, usually with high word accuracy. The attack can deduce responses with perfect word accuracy 29 percent of the time.
  • The side channel used in this latest attack resides in tokens that AI assistants use when responding to a user query. While the token delivery is encrypted, the real-time, token-by-token transmission exposes a previously unknown side channel, which the researchers call the “token-length sequence.”
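To make the mechanism concrete, here is a minimal sketch of how a token-length sequence could leak through encrypted streaming. The framing details are assumptions for illustration, not the researchers' exact model: it assumes each token is streamed in its own encrypted record whose ciphertext size equals the token's byte length plus a fixed overhead, so a passive observer recovers token lengths by simple subtraction.

```python
# Sketch of the "token-length sequence" side channel (illustrative only).
# Assumption: each token travels in its own encrypted record, and the
# ciphertext length is the token's UTF-8 byte length plus fixed framing
# overhead, as with a length-preserving stream cipher.

FIXED_OVERHEAD = 24  # hypothetical per-record framing/MAC bytes


def observed_packet_sizes(tokens, overhead=FIXED_OVERHEAD):
    """What a passive eavesdropper sees: one ciphertext size per token."""
    return [len(tok.encode("utf-8")) + overhead for tok in tokens]


def recover_token_lengths(packet_sizes, overhead=FIXED_OVERHEAD):
    """Subtract the constant overhead to recover the token-length sequence."""
    return [size - overhead for size in packet_sizes]


tokens = ["The", " diagnosis", " is", " confirmed", "."]
sizes = observed_packet_sizes(tokens)
lengths = recover_token_lengths(sizes)
# lengths == [3, 10, 3, 10, 1]: the plaintext stays hidden, but the
# length sequence gives an attacker enough signal to train a model that
# guesses likely responses, as the researchers demonstrated.
```

Note that the attacker never breaks the encryption itself; the leak comes entirely from the observable size and timing of each streamed token, which is why sending responses in large batches (as Gemini does) avoids the channel.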
