OpenAI and Other Tech Giants Will Have to Warn the US Government When They Start New AI Projects

Jan 27, 2024 - wired.com
The Biden administration is preparing to use the Defense Production Act to require tech companies to notify the US government when they train an AI model using a significant amount of computing power, a category that covers large language models such as the one behind OpenAI's ChatGPT. The new rule, which could take effect as early as next week, will give the government access to information about sensitive projects at companies such as OpenAI, Google, and Amazon. Companies will also be required to report the safety testing performed on their new AI models.

The new rules implement part of a White House executive order issued last October, which tasked the Commerce Department with developing a scheme for companies to report details about powerful new AI models. The order also requires cloud computing providers such as Amazon, Microsoft, and Google to inform the government when a foreign company uses their resources to train a large language model. The Commerce Department is also working on guidelines to help companies understand the risks posed by their AI models, including the potential for their use in human rights abuses.

Key takeaways:

  • The Biden administration is preparing to use the Defense Production Act to compel tech companies to inform the government when they train an AI model using a significant amount of computing power.
  • Companies will also have to provide information on safety testing being done on their new AI creations, potentially giving the US government access to key information about sensitive projects inside tech companies like OpenAI, Google, and Amazon.
  • The new rules are being implemented as part of a White House executive order issued last October, which also requires cloud computing providers to inform the government when a foreign company uses their resources to train a large language model.
  • The National Institute of Standards and Technology (NIST) is working to define standards for testing the safety of AI models as part of the creation of a new US government AI Safety Institute; the guidelines may include ways of ensuring AI cannot be used to commit human rights abuses.