The opinion also addresses AI models trained on unlawfully processed data, suggesting that if the resulting model is effectively anonymized before deployment, the GDPR may not apply to its subsequent operation. This stance has raised concerns that it could legitimize data scraping carried out without a proper legal basis. The EDPB's guidance aims to help regulators apply GDPR rules to AI technologies while giving developers insight into regulatory expectations. However, the opinion leaves room for interpretation and emphasizes the need for tailored assessments of individual circumstances.
Key takeaways:
- The EDPB opinion explores how AI developers can use personal data to build AI models without violating EU privacy law, focusing on three questions: model anonymity, legitimate interest as a legal basis, and the lawful deployment of models trained on unlawfully processed data.
- Model anonymity must be assessed on a case-by-case basis, with developers encouraged to use privacy-preserving techniques to minimize identifiability risks.
- Legitimate interest may be a viable legal basis for AI development, but only after a thorough assessment of the purpose of the processing, its necessity, and its impact on individuals' rights.
- AI models trained on unlawfully processed data might still be deployed lawfully if the model is effectively anonymized before deployment, though this approach raises concerns about potential misuse.