What will the EU’s proposed act to regulate AI mean for consumers?

Mar 14, 2024 - theguardian.com
The European Parliament has endorsed the EU's proposed AI act, marking a significant step towards regulating AI technology. The legislation, which now requires approval from EU member state ministers, will be implemented over three years and aims to ensure AI tools are safe and trustworthy. The act defines AI as a "machine-based system designed to operate with varying levels of autonomy" and covers a wide range of AI tools. It bans systems that pose an "unacceptable risk" but exempts those used for military, defence, or national security purposes, as well as those used in scientific research and innovation.

The act also addresses the risks posed by AI, prohibiting systems that manipulate people to cause harm, "social scoring" systems, predictive policing, and biometric categorisation systems, among others. It allows certain exemptions for law enforcement, such as using real-time biometric identification systems to find missing persons or prevent terror attacks. The act also categorises "high risk" systems that will be legal but closely monitored, and includes provisions for "general-purpose" AI systems. The legislation has received a mixed response from tech companies, with some warning against overregulation and others expressing concern over the act's limit on the computing power used to train AI models. Fines for breaches range from €7.5m or 1.5% of a company's total worldwide turnover to €35m or 7% of turnover for deploying or developing banned AI tools.

Key takeaways:

  • The European Parliament has endorsed the EU's proposed AI act, which is a significant step in regulating AI technology. The act will be implemented over three years and aims to ensure that AI tools are safe and trustworthy.
  • The act provides a detailed definition of AI and covers a wide range of AI tools, including chatbots and systems that sift through job applications. It bans systems that pose an 'unacceptable risk' but exempts AI tools designed for military, defence, national security, and scientific research.
  • The act also tackles the risks posed by AI, prohibiting certain systems such as those that manipulate people to cause harm, 'social scoring' systems, and 'biometric categorisation' systems. It also has provisions for 'high risk' systems that will be legal but closely observed.
  • The act has received a mixed response from tech companies, with some warning against overregulation and others expressing concern about the limit set on the computing power used to train AI models. The act also sets out fines for breaches, with the largest reserved for deploying or developing banned AI tools.