ChatGPT will no longer comply if you ask it to repeat a word 'forever'— after a recent prompt revealed training data and personal info

Dec 04, 2023 - businessinsider.com
OpenAI's chatbot, ChatGPT, now refuses requests to repeat specific words indefinitely, according to a report by 404 Media. The refusal is consistent with OpenAI's usage policies, which were last updated on March 23. The chatbot cited technical limitations, practicality and purpose, and user experience as its reasons, stating that its model isn't designed for continuous, unending tasks and that such requests don't align with its purpose of providing useful and meaningful responses.

The usage restriction follows a recent revelation by Google's DeepMind researchers that asking ChatGPT to repeat words "forever" can expose some of the chatbot's internal training data. In one instance, the AI produced what appeared to be a real email address and phone number when asked to repeat the word "poem" indefinitely. The researchers managed to extract over 10,000 unique verbatim memorized training examples using only $200 worth of queries.
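The attack described above works by watching for the point where the model stops repeating the requested word and "diverges" into other text, which may include memorized training data. A minimal illustrative sketch of that idea (not DeepMind's actual extraction code, and the sample response below is invented for demonstration):

```python
# Hypothetical sketch: given a model's response to a prompt like
# "repeat the word 'poem' forever", return whatever text appears
# after the repetition stops -- the "divergent" tail that the
# researchers scanned for memorized training data.

def find_divergence(response: str, word: str = "poem") -> str:
    """Return the portion of `response` after repetition of `word` ends."""
    tokens = response.split()
    i = 0
    while i < len(tokens) and tokens[i] == word:
        i += 1
    return " ".join(tokens[i:])

# Invented example response; a real leak might resemble an email
# address or phone number, as reported in the article.
tail = find_divergence("poem poem poem poem call me at 555-0123")
# tail == "call me at 555-0123"
```

In the actual research, responses that diverged were checked against a corpus of known web text to confirm verbatim memorization; this sketch only isolates the divergent portion.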

Key takeaways:

  • OpenAI's ChatGPT refuses to repeat specific words indefinitely, even when prompted to do so.
  • ChatGPT's refusal is based on technical limitations, practicality and purpose, and user experience, as it aims to provide useful, relevant, and meaningful responses rather than spammy or unhelpful ones.
  • Researchers from Google's DeepMind found a vulnerability in ChatGPT's language model that revealed some of the chatbot's internal training data when asked to repeat specific words "forever".
  • This isn't the first time a generative AI chatbot has revealed what appears to be confidential information, with Google's AI chatbot, Bard, disclosing its backend name in February.