The preliminary agreement also includes a classification of GPAIs posing "systemic risk". A model receives this designation if it has "high impact capabilities", which is presumed when the cumulative compute used for its training, measured in floating point operations (FLOPs), exceeds 10^25. Providers of GPAIs with systemic risk face further obligations: undertaking model evaluation with standardized protocols and state-of-the-art tools, documenting and reporting serious incidents, conducting and documenting adversarial testing, ensuring an adequate level of cybersecurity, and reporting the actual or estimated energy consumption of the model.
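For a rough sense of what the 10^25 FLOP criterion means in practice, the sketch below uses the widely cited 6 × parameters × tokens approximation of training compute. Note this approximation, and the example model sizes, are illustrative assumptions, not part of the agreement, which only specifies the threshold itself:

```python
# Rough check against the draft AI Act's 10^25 FLOP threshold, using the
# common 6*N*D training-compute approximation (an estimate; the agreement
# does not prescribe how training FLOPs are to be calculated).

THRESHOLD_FLOPS = 1e25

def estimated_training_flops(params: float, tokens: float) -> float:
    """Approximate training compute as 6 * parameters * training tokens."""
    return 6 * params * tokens

def exceeds_systemic_risk_threshold(params: float, tokens: float) -> bool:
    """True if the estimated training compute exceeds 10^25 FLOPs."""
    return estimated_training_flops(params, tokens) > THRESHOLD_FLOPS

# Hypothetical examples (illustrative numbers only):
# a 7B-parameter model trained on 2T tokens stays well under the threshold,
# while a 1T-parameter model trained on 10T tokens crosses it.
print(exceeds_systemic_risk_threshold(7e9, 2e12))    # False
print(exceeds_systemic_risk_threshold(1e12, 10e12))  # True
```

Under this approximation, the threshold sits above today's typical open-source models but within reach of the largest frontier training runs.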
Key takeaways:
- European Union lawmakers have reached a preliminary agreement on how to regulate artificial intelligence, specifically foundation models/general-purpose AIs (GPAIs), according to a leaked proposal.
- There is a partial carve-out from some obligations for GPAI systems provided under free and open-source licences, though exceptions apply, including for "high risk" models.
- The preliminary agreement retains the classification of GPAIs with so-called "systemic risk"; a model receives this designation if it has "high impact capabilities".
- Providers of GPAIs with systemic risk face additional obligations: model evaluation with standardized protocols and state-of-the-art tools, serious-incident documentation and reporting, adversarial testing, adequate cybersecurity, and reporting of the model's actual or estimated energy consumption.