The taskforce also highlighted the importance of informing users that their inputs may be used for training purposes, and the need for OpenAI to provide accurate information about the chatbot's output. It also emphasized that OpenAI remains responsible for complying with the GDPR and cannot shift privacy risk onto users. The taskforce was set up in 2023 to streamline enforcement of the bloc's privacy rules on the technology, but its existence may be delaying decisions and investigations into complaints about the chatbot.
Key takeaways:
- A data protection taskforce has been considering how the European Union's data protection rules apply to OpenAI's chatbot, ChatGPT, but remains undecided on key legal questions, such as the lawfulness and fairness of OpenAI's processing.
- OpenAI could face significant penalties for confirmed violations of the EU's privacy regime, including a stop to non-compliant processing and fines of up to 4% of global annual turnover.
- OpenAI now claims legitimate interest as its legal basis for processing the personal data used for model training, but a draft decision from Italy's data protection authority found OpenAI had violated the GDPR.
- The taskforce has suggested OpenAI could use "adequate safeguards" to change the balancing test in favor of the controller, potentially forcing AI companies to be more careful about how and what data they collect to limit privacy risks.