The Act also includes provisions for general-purpose AI models (GPAIs), which underpin many AI applications. Commercial GPAIs are subject to transparency rules, while those deemed to pose "systemic risk" must additionally carry out proactive risk assessment and mitigation. The Act officially came into force on August 1, 2024, with staggered compliance deadlines extending to mid-2027. Oversight is largely decentralized: enforcement falls to authorities in each member state, with penalties for non-compliance of up to 7% of global annual turnover.
Key takeaways:
- The EU AI Act is a risk-based rulebook for artificial intelligence, aiming to foster trust among citizens and ensure AI technologies remain human-centered while also providing clear rules for businesses.
- The Act establishes a hierarchy of use cases by risk level: those posing 'unacceptable risk' are banned outright, while others are categorized as 'high-risk' or 'limited-risk', each with its own set of obligations.
- For 'high-risk' AI applications, developers must complete conformity assessments before placing them on the market, demonstrating compliance in areas such as data quality, transparency, human oversight, accuracy, cybersecurity, and robustness.
- The AI Act officially entered into force across the EU on August 1, 2024, with compliance deadlines phased in at intervals from early 2025 through roughly mid-2027.