
EU privacy body weighs in on some tricky GenAI lawfulness questions | TechCrunch

Dec 18, 2024 - techcrunch.com
The European Data Protection Board (EDPB) has issued an opinion on the use of personal data in AI model development, addressing key issues such as model anonymity, legitimate interest as a legal basis for data processing, and the deployment of AI models trained on unlawfully processed data. The opinion emphasizes that AI models must be assessed on a case-by-case basis to determine whether they can be considered anonymous, which would take them outside the scope of EU privacy law. It also explores whether legitimate interest can serve as a legal basis for processing personal data, which requires a three-step test assessing the legitimacy of the purpose, the necessity of the processing, and the balance against individuals' rights. The EDPB suggests that AI models could potentially meet these criteria, but stresses that there is no one-size-fits-all solution.

The opinion also addresses the issue of AI models trained on unlawfully processed data, suggesting that if developers can ensure personal data is anonymized before deployment, the GDPR may not apply to the model's operation. This stance has raised concerns about potentially legitimizing data scraping without proper legal bases. The EDPB's guidance aims to assist regulators in applying GDPR rules to AI technologies, while also providing developers with insights into regulatory expectations. However, the opinion leaves room for interpretation and emphasizes the need for tailored assessments based on individual circumstances.

Key takeaways:

  • The EDPB opinion explores how AI developers can use personal data for AI models without violating EU privacy laws, focusing on model anonymity, legitimate interest, and lawfully deploying models trained on unlawfully processed data.
  • Model anonymity must be assessed on a case-by-case basis, with developers encouraged to use privacy-preserving techniques to minimize identifiability risks.
  • Legitimate interest could be a viable legal basis for AI development, but it requires a thorough assessment of the processing's purpose, necessity, and impact on individual rights.
  • AI models trained on unlawfully processed data might still be deployed lawfully if developers ensure anonymization before deployment, though this approach raises concerns about potential misuse.