Feature Story
AI systems with 'unacceptable risk' are now banned in the EU | TechCrunch
Feb 02, 2025 · techcrunch.com
The Act allows certain exemptions, such as law enforcement using biometric systems for targeted searches or preventing imminent threats, and systems inferring emotions for medical or safety reasons in workplaces and schools. These exemptions require authorization from governing bodies. The European Commission plans to release additional guidelines in early 2025. The AI Act's interaction with other EU laws, such as GDPR, NIS2, and DORA, may present challenges, particularly regarding overlapping incident notification requirements. Organizations are advised to understand how these legal frameworks fit together to ensure compliance.
Key takeaways
- The EU AI Act categorizes AI systems into four risk levels, with unacceptable risk applications being prohibited entirely.
- Companies using prohibited AI applications in the EU could face fines of up to €35 million or 7% of their annual worldwide revenue, whichever is greater.
- The February 2 compliance deadline is largely a formality, with full enforcement and fines expected to take effect in August.
- Exemptions exist for certain AI applications, such as law enforcement use of biometrics, but they require authorization from a governing body, and the system cannot be the sole basis for a decision producing adverse legal effects.