The new guidance also requires agencies to monitor their AI systems frequently, independently evaluate the safety risk of each AI platform, and ensure deployed AI includes safeguards against algorithmic discrimination and provides public transparency. Government-owned AI models, code, and data should be made public unless they pose a risk to government operations. The US still lacks comprehensive federal legislation regulating AI; for now, the AI executive order provides guidelines for how government agencies should approach the technology.
Key takeaways:
- All US federal agencies are now required to have a senior leader overseeing all AI systems they use, and must establish AI governance boards to coordinate how AI is used within the agency.
- Agencies will have to submit an annual report to the Office of Management and Budget (OMB) listing all AI systems they use, any associated risks, and how they plan to mitigate those risks.
- The chief AI officer need not be a political appointee, and governance boards must be created by the summer. This expands on policies previously outlined in the Biden administration's AI executive order.
- Under the new guidance, government-owned AI models, code, and data should be released to the public unless they pose a risk to government operations. The United States still has no laws regulating AI.