The author expresses concern about these privacy issues and speculates that the European Union might intervene, though that could take time. They compare the situation to Facebook's past privacy scandals and suggest that OpenAI's CEO, Sam Altman, is pushing boundaries. The author also notes that turning off the "Chat history & training" setting sometimes clears the chat itself, which they suspect is intentional friction aimed at privacy-conscious users. The article ends by noting that OpenAI's models can still reproduce their training data verbatim (1:1).
Key takeaways:
- ChatGPT by default uses personal information shared by users to train future versions of the AI.
- The "Chat history & training" setting that allows ChatGPT to use your chats for training is on by default, and your choice is stored only on your device, not on OpenAI's servers, making it hard to prove you ever disabled it (see the sketch after this list).
- A separate feature lets you send personal instructions with each message; it is not covered by the "Chat history & training" privacy setting, and disabling it requires emailing OpenAI.
- Turning off the "Chat history & training" setting sometimes clears the chat itself, which may be an intentional move to discourage users from prioritizing privacy.
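To make the client-side-storage problem concrete: if the opt-out lives only in browser storage, clearing site data (or switching devices) silently re-enables training, and there is no server-side record to prove you ever opted out. Below is a minimal sketch of checking such a flag from the browser console, assuming a hypothetical localStorage key named `oai/chat-history-training`; OpenAI does not document the actual key name or storage mechanism, so both are assumptions for illustration.

```typescript
// Hypothetical check for a client-side-only opt-out flag.
// The key name "oai/chat-history-training" is an assumed example;
// OpenAI does not document where (or how) this setting is stored.
const KEY = "oai/chat-history-training";

const flag: string | null = localStorage.getItem(KEY);
if (flag === null) {
  // No local record: after clearing site data or moving to a new device,
  // nothing proves the user ever turned training off.
  console.log("No opt-out flag found; training is presumably back on.");
} else {
  console.log(`Opt-out flag present: ${flag}`);
}
```

Because the flag would exist only in the browser, the user keeps no durable, verifiable record of having opted out, which is exactly the asymmetry the author objects to.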