The article outlines two potential attack chains: data exfiltration and phishing. In both cases, an attacker could create a public channel, post a malicious instruction, and manipulate Slack AI to exfiltrate data or render a phishing link. The article also highlights the increased risk following a change to Slack AI on August 14th, which now includes files from channels and DMs in its answers. The author suggests that administrators should restrict Slack AI’s ability to ingest documents until the issue is resolved.
Key takeaways:
- A vulnerability in Slack's AI feature can allow attackers to steal information from private channels by manipulating the language model used for content generation.
- The issue stems from 'prompt injection', where the AI cannot distinguish between a system prompt created by a developer and the rest of the context appended to the query, leading it to potentially follow malicious instructions.
- Attackers can use this vulnerability to exfiltrate data or launch phishing attacks, even without having access to the private channel or data within Slack.
- Slack introduced a change on August 14th to include files from channels and DMs into Slack AI answers, which potentially increases the risk surface area for these types of attacks.