The law also establishes a new governance architecture for AI, including an enforcement body within the European Commission called the AI Office, an AI Board made up of representatives from EU member states, a scientific panel for oversight, and an advisory forum for technical expertise. Standards bodies will play a key role in determining exactly what is demanded of AI app developers. The law also encourages the setting up of regulatory sandboxes for the development and real-world testing of novel AI applications. Existing laws, such as copyright legislation, the GDPR, the bloc's online governance regime and various competition laws, may also apply to AI developers.
Key takeaways:
- The European Union has given final approval to the EU AI Act, a ground-breaking set of risk-based regulations for artificial intelligence. It is the first comprehensive law of its kind in the world and could set a global standard for AI regulation.
- The new law will be implemented in phases, with some provisions not applying until two years or more after it enters into force. It adopts a risk-based approach to regulating AI, banning certain "unacceptable risk" use cases outright and defining a set of "high risk" uses subject to stricter requirements.
- The law establishes a new governance architecture for AI, including an enforcement body within the European Commission called the AI Office, an AI Board comprising representatives from EU member states, a scientific panel for oversight, and an advisory forum for technical expertise.
- While the EU AI Act is the bloc's first comprehensive regulation for artificial intelligence, AI developers may already be subject to existing laws such as copyright legislation, the GDPR, the bloc's online governance regime and various competition laws.