Personal Information Exploit on OpenAI’s ChatGPT Raises Privacy Concerns

Dec 22, 2023 - nytimes.com
The article discusses the privacy risks posed by large language models (L.L.M.s) such as OpenAI's GPT-3.5 Turbo. Researchers from Indiana University Bloomington were able to extract a list of business and personal email addresses of New York Times employees from the model, demonstrating that it can reveal sensitive personal information. The researchers used fine-tuning, a process intended to give the model more knowledge about a specific area, which can also be used to bypass some of its defenses against revealing private information.
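
The bypass described works through the standard fine-tuning workflow rather than the chat interface: a user uploads their own prompt/response examples, and the resulting fine-tuned model tends to answer similar questions even where the base model would refuse. The sketch below is a minimal illustration of that mechanism using the OpenAI Python SDK (v1.x); the file name, model string, and example data are assumptions for illustration, not the researchers' actual setup.

```python
# Minimal sketch of submitting a fine-tuning job via the OpenAI Python SDK.
# The JSONL file name and its contents are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Fine-tuning accepts arbitrary user-supplied prompt/response pairs in JSONL,
# one example per line, e.g.:
# {"messages": [{"role": "user", "content": "..."},
#               {"role": "assistant", "content": "..."}]}
training_file = client.files.create(
    file=open("finetune_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Even a small set of such examples steers the fine-tuned model toward
# answering the same kind of question, which is how safety refusals can
# be weakened according to the article.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)
```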

The article also highlights that OpenAI, along with other AI companies, cannot guarantee that their models have not learned sensitive information. This is particularly concerning as no one, apart from a limited number of OpenAI employees, really knows what information is in the AI's training-data memory. The company does use natural language texts from many different public sources, including websites, and licenses input data from third parties, but is secretive about what specific information it uses.

Key takeaways:

  • Researchers have managed to extract personal and business email addresses from GPT-3.5 Turbo, a large language model from OpenAI, by bypassing the model's restrictions on responding to privacy-related queries.
  • While OpenAI and other companies have safeguards in place to prevent users from asking for personal information, researchers have found ways to bypass these protections.
  • The vulnerability is concerning as no one, apart from a limited number of OpenAI employees, knows what information lurks in ChatGPT’s training-data memory.
  • OpenAI uses natural language texts from many different public sources, including websites, and also licenses input data from third parties, which could potentially include sensitive information.