High-risk systems will have longer to comply, with standalone systems getting 24 months and AI embedded in medical devices getting 36 months. The proposed AI Act has faced criticism for potentially hindering research, but regulatory sandboxes will allow development outside the strict provisions of the legislation, provided the authorities approve. This includes real-world testing for a period of six months, extendable by a further six months if necessary.
Key takeaways:
- Depending on the risk category, organizations have between six months and three years from the legislation's entry into force to comply, with the tightest deadlines applying to prohibited uses and to providers of general-purpose AI models.
- The legislation proposes a pyramid of risk categories in which some types of AI must comply sooner than others, with bans on prohibited uses taking effect six months after the legislation enters into force.
- Providers of general-purpose AI models will need to meet certain obligations to comply with the law, including conducting model evaluations, assessing and mitigating systemic risks, and reporting serious incidents to the European Commission.
- The proposed AI Act has faced criticism for potentially burdening research, but regulatory sandboxes would allow development outside the strict provisions of the legislation, provided that the authorities approve.