Sign up to save tools and stay up to date with the latest in AI
bg
bg
1

Hacking Google Bard - From Prompt Injection to Data Exfiltration · Embrace The Red

Nov 13, 2023 - embracethered.com
The article discusses a vulnerability in Google Bard, a language learning model (LLM), that allows for Indirect Prompt Injection attacks via emails or Google Docs. This vulnerability can be exploited by an attacker to access and analyze a user's Drive, Docs, and Gmail without their consent. The author demonstrates how they were able to exploit this vulnerability by using a prompt injection payload to read the history of a conversation and form a hyperlink that contained it. However, image rendering was blocked by Google’s Content Security Policy (CSP).

The author then discusses how they bypassed the CSP using Google Apps Script, which allowed them to render images from an attacker-controlled server. They created a "Bard Logger" in Apps Script that writes all query parameters appended to the invocation URL to a Google Doc, which is the exfiltration destination. The author concludes by stating that Google has fixed the issue, but it is unclear how they did so as the CSP was not modified and images still render. The author suggests that Google may have put some filtering in place to prevent insertion of data into the URL.

Key takeaways:

  • Google Bard, a language learning model, was found to be vulnerable to Indirect Prompt Injection attacks, potentially allowing unauthorized access to user's personal documents and emails.
  • The vulnerability was exploited by using Google Bard's ability to render images and the power of the language learning model to access previous data in the chat context.
  • A bypass for Google's Content Security Policy was found using Google Apps Script, allowing for the creation of a "Bard Logger" that could write all query parameters to a Google Doc.
  • The issue was reported to Google and confirmed fixed within a month, although it's not entirely clear what the fix entailed.
View Full Article

Comments (0)

Be the first to comment!