The executive order comes as other governments, including the European Union, are also developing regulations to manage AI risks. The Biden administration's move is part of a broader effort to address alleged abuses by Silicon Valley and to mitigate AI's potential harms to jobs, surveillance, and democracy. In July, seven Big Tech and AI companies, including Amazon, Google, and Microsoft, made voluntary commitments to promote safe and transparent AI development.
Key takeaways:
- The Biden administration is preparing an executive order regulating artificial intelligence (AI), expected to be released two days before an international AI summit.
- The executive order will require advanced AI models to undergo assessments before they can be used by federal workers, and will direct federal agencies to consider incorporating AI into their work, particularly to strengthen national cyber defenses.
- Regulating AI is a significant test for the Biden administration, which is also addressing alleged abuses by Silicon Valley and working on bipartisan legislation to respond to the challenges posed by AI.
- Seven Big Tech and AI companies, including Amazon, Google, and Microsoft, have made voluntary commitments to the safe, secure, and transparent development of AI.