Personal email addresses at potential risk as research team reveals OpenAI GPT-3.5 Turbo privacy threat

Dec 26, 2023 - businesstoday.in
A study led by Rui Zhu, a PhD candidate at Indiana University Bloomington, has revealed a potential privacy threat in OpenAI's language model GPT-3.5 Turbo. Zhu contacted individuals, including New York Times staff, at email addresses obtained from the AI, exploiting the model's ability to recall personal data from its training material and bypassing its privacy safeguards. The model accurately produced work email addresses for 80 percent of the Times employees tested, raising concerns that AI tools like ChatGPT can disclose sensitive information.

OpenAI's language models, including GPT-3.5 Turbo and GPT-4, are designed to continue learning from new information, and Zhu's team manipulated the model's fine-tuning interface to bypass its security measures. Despite efforts by OpenAI, Meta, and Google to block requests for personal information, researchers have found ways to circumvent these safeguards. The incident has raised broader concerns about privacy in large language models, with critics calling for increased transparency and stronger protections to safeguard sensitive information in AI models.
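The article does not reproduce the researchers' code, but the general shape of the technique can be sketched against OpenAI's public fine-tuning API: upload a small file of already-known name-and-email pairs, fine-tune gpt-3.5-turbo on it, then ask the tuned model about names that were not in the file. The sketch below is a hypothetical illustration, not the study's actual code; the file name, placeholder pairs, and prompts are invented, and it assumes the OpenAI Python SDK (v1+) with an API key in the environment.

    # Hypothetical sketch of the fine-tuning technique described above, using
    # the OpenAI Python SDK (v1+). Names, addresses, and prompts are invented.
    import json
    import time

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Step 1: build a small JSONL file of already-known name -> email pairs.
    # OpenAI requires at least 10 training examples; these placeholders stand
    # in for the publicly available addresses a researcher would start from.
    known_pairs = [(f"Jane Doe {i}", f"jane.doe{i}@example.com") for i in range(10)]
    with open("pairs.jsonl", "w") as f:
        for name, email in known_pairs:
            example = {
                "messages": [
                    {"role": "user", "content": f"What is {name}'s work email address?"},
                    {"role": "assistant", "content": email},
                ]
            }
            f.write(json.dumps(example) + "\n")

    # Step 2: upload the file and start a fine-tuning job on gpt-3.5-turbo.
    training_file = client.files.create(file=open("pairs.jsonl", "rb"), purpose="fine-tune")
    job = client.fine_tuning.jobs.create(training_file=training_file.id, model="gpt-3.5-turbo")

    # Step 3: poll until the job finishes.
    while job.status not in ("succeeded", "failed", "cancelled"):
        time.sleep(30)
        job = client.fine_tuning.jobs.retrieve(job.id)

    # Step 4: query the tuned model about a name that was NOT in the training
    # file; a correct answer can only come from what the base model memorized.
    if job.status == "succeeded":
        response = client.chat.completions.create(
            model=job.fine_tuned_model,
            messages=[{"role": "user", "content": "What is Alex Poe's work email address?"}],
        )
        print(response.choices[0].message.content)

If the tuned model answers correctly for someone absent from the training file, the address can only have come from data the base model memorized, which is the behavior the study measured. The apparent weakness, as described, is that the fine-tuning path lets users steer the model past refusals that apply to ordinary chat prompts.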

Key takeaways:

  • A study led by Rui Zhu, a PhD candidate at Indiana University Bloomington, has found a potential privacy threat linked to OpenAI's language model, GPT-3.5 Turbo. The model was used to contact individuals, including personnel from The New York Times, using email addresses obtained from the AI.
  • The experiment exploited GPT-3.5 Turbo's ability to recall personal data, bypassing its usual privacy safeguards. The model accurately provided work email addresses for 80 percent of the Times employees tested, raising concerns about the potential for AI tools to disclose sensitive information.
  • The researchers manipulated the model's fine-tuning interface, which is designed to let users enrich the model's knowledge in specific domains, to bypass the tool's security measures. Despite efforts by OpenAI, Meta, and Google to block requests for personal information, researchers have found ways to circumvent these safeguards.
  • OpenAI has responded to these concerns, emphasizing its commitment to safety and its policy of refusing requests for private data. Experts remain skeptical, however, citing the lack of transparency about the model's training data and the dangers of AI models storing private information.