Sign up to save tools and stay up to date with the latest in AI
bg
bg
1

Data Exfiltration from Slack AI via indirect prompt injection

Aug 21, 2024 - news.bensbites.com
The article discusses a vulnerability in Slack's AI feature that could allow attackers to steal information from private Slack channels. The vulnerability arises from the language model used for content generation, which cannot distinguish between a system prompt created by a developer and the rest of the context appended to the query. This means that if Slack AI ingests a malicious instruction via a message, it's likely to follow that instruction, potentially leading to data exfiltration. The issue was responsibly disclosed to Slack.

The article outlines two potential attack chains: data exfiltration and phishing. In both cases, an attacker could create a public channel, post a malicious instruction, and manipulate Slack AI to exfiltrate data or render a phishing link. The article also highlights the increased risk following a change to Slack AI on August 14th, which now includes files from channels and DMs in its answers. The author suggests that administrators should restrict Slack AI’s ability to ingest documents until the issue is resolved.

Key takeaways:

  • A vulnerability in Slack's AI feature can allow attackers to steal information from private channels by manipulating the language model used for content generation.
  • The issue stems from 'prompt injection', where the AI cannot distinguish between a system prompt created by a developer and the rest of the context appended to the query, leading it to potentially follow malicious instructions.
  • Attackers can use this vulnerability to exfiltrate data or launch phishing attacks, even without having access to the private channel or data within Slack.
  • Slack introduced a change on August 14th to include files from channels and DMs into Slack AI answers, which potentially increases the risk surface area for these types of attacks.
View Full Article

Comments (0)

Be the first to comment!