The amendments also mean AI labs no longer need to submit certifications of safety test results under penalty of perjury; instead, they will submit public statements outlining their safety practices. The bill now requires developers to exercise "reasonable care" to ensure their AI models do not pose a significant risk of causing a catastrophe, replacing the previous "reasonable assurance" standard. The bill also now includes a protection for fine-tuned open source models. Despite these changes, the bill still holds developers liable for the dangers of their AI models, a point of contention for many critics. The bill is now headed to California’s Assembly floor for a final vote.
Key takeaways:
- California's SB 1047, a bill aimed at preventing AI disasters, has been amended following opposition from Silicon Valley, with changes suggested by AI firm Anthropic and other opponents.
- The bill no longer allows California’s attorney general to sue AI companies for negligent safety practices before a catastrophic event has occurred, and AI labs are no longer required to submit certifications of safety test results under penalty of perjury.
- SB 1047 no longer creates the Frontier Model Division (FMD), a new government agency, but still creates the Board of Frontier Models within the existing Government Operations Agency.
- The bill is now headed to California’s Assembly floor for a final vote. If it passes, it will be referred back to California’s Senate for another vote because of these latest amendments. If it passes both chambers, it will head to Governor Newsom’s desk, where it could be vetoed or signed into law.