The usage restriction follows a recent revelation by Google's DeepMind researchers that asking ChatGPT to repeat words "forever" can expose some of the chatbot's internal training data. In one instance, the AI produced what appeared to be a real email address and phone number when asked to repeat the word "poem" indefinitely. The researchers managed to extract over 10,000 unique verbatim memorized training examples using only $200 worth of queries.
Key takeaways:
- OpenAI's ChatGPT refuses to repeat specific words indefinitely, even when prompted to do so.
- ChatGPT's refusal is based on technical limitations, practicality and purpose, and user experience, as it aims to provide useful, relevant, and meaningful responses rather than spammy or unhelpful ones.
- Researchers from Google's DeepMind found a vulnerability in ChatGPT's language model that revealed some of the chatbot's internal training data when asked to repeat specific words "forever".
- This isn't the first time a generative AI chatbot has revealed what appears to be confidential information, with Google's AI chatbot, Bard, disclosing its backend name in February.