AI systems with 'unacceptable risk' are now banned in the EU | TechCrunch

Feb 02, 2025 - techcrunch.com
The European Union's AI Act, which entered into force on August 1, 2024, introduces a regulatory framework categorizing AI systems into four risk levels: minimal, limited, high, and unacceptable. The Act prohibits AI applications deemed to pose "unacceptable risk," such as those used for social scoring, subliminal manipulation, exploiting vulnerabilities, predicting crimes based on appearance, and real-time biometric data collection in public spaces for law enforcement. Companies violating these prohibitions could face fines of up to €35 million or 7% of their annual revenue. The first compliance deadline is February 2, with full enforcement, including fines, expected by August. Over 100 companies, including Amazon, Google, and OpenAI, have already pledged to comply with the Act's principles, although some major players, such as Meta and Apple, have not.

The Act allows certain exemptions, such as law enforcement using biometric systems for targeted searches or preventing imminent threats, and systems inferring emotions for medical or safety reasons in workplaces and schools. These exemptions require authorization from governing bodies. The European Commission plans to release additional guidelines in early 2025. The AI Act's interaction with other EU laws, such as GDPR, NIS2, and DORA, may present challenges, particularly regarding overlapping incident notification requirements. Organizations are advised to understand how these legal frameworks fit together to ensure compliance.

Key takeaways:

  • The EU AI Act categorizes AI systems into four risk levels, with unacceptable risk applications being prohibited entirely.
  • Companies using prohibited AI applications in the EU could face fines of up to €35 million or 7% of their annual revenue.
  • The February 2 compliance deadline is largely a formality; full enforcement, including fines, is expected to take effect in August.
  • Exceptions exist for certain AI applications, such as law enforcement use of biometrics, but require authorization and cannot solely produce adverse legal effects.
