This incident, along with similar ones, underscores the need to keep personal information out of interactions with ChatGPT and other AI platforms. In a related event last March, OpenAI temporarily took the service offline after a glitch displayed chat history titles from one user to unrelated users. In November, a research paper detailed how investigators were able to extract private information from ChatGPT's training data through specific queries.
Key takeaways:
- OpenAI's chatbot, ChatGPT, has been involved in privacy breaches, including disclosing private information such as login credentials and personal data belonging to third parties.
- In one incident, an employee using ChatGPT to troubleshoot a pharmacy's prescription drug portal inadvertently exposed sensitive information, including username and password combinations.
- OpenAI had previously temporarily shut down ChatGPT in response to a glitch that displayed chat history titles from one active user to unrelated users.
- A research paper released in November detailed how investigators were able to extract personal details from ChatGPT's training data, highlighting the need to keep personal information out of interactions with AI platforms.