A draft of the EU AI Act would require foundation model developers to assess potential risks, test models during and after development, examine training data for bias, validate data, and publish technical documentation before release. Open-source companies have urged the EU to account for smaller developers, arguing that compliance with the rules may be difficult for some of them. The EU AI Act is widely viewed as a potential template for AI regulation globally, even though the EU has moved more slowly than some other jurisdictions, such as China.
Key takeaways:
- The EU AI Act, which aims to regulate foundation models, is still under debate and is unlikely to pass before December because of disagreements among European lawmakers.
- Spain, which currently holds the EU's rotating presidency, is advocating regular checks for vulnerabilities and a tiered system of regulation based on the number of users a model has.
- Under the current draft, foundation model developers would need to assess potential risks, test models throughout development and after market release, examine training data for bias, validate data, and publish technical documentation before release.
- While the EU AI Act is seen as a potential model for other countries, the EU has been slower to pass its legislation than some other international players, such as China, which enacted its rules in August.