Critics argue that voluntary safeguards are not enough and that more must be done to hold these companies accountable. The commitments may also favor larger companies, since smaller players could struggle with the cost of meeting such standards. The White House has consulted with several countries on the voluntary commitments, reflecting growing global interest in AI regulation. However, concerns about AI's impact on jobs, market competition, and environmental resources remain unaddressed.
Key takeaways:
- President Joe Biden announced that his administration has secured voluntary commitments from seven U.S. companies, including Amazon, Google, Meta, and Microsoft, to ensure that their AI products are safe before release.
- The companies have committed to security testing, carried out in part by independent experts, to guard against major risks such as those to biosecurity and cybersecurity. They will also publicly report flaws and risks in their technology.
- The voluntary commitments are meant to be an immediate way of addressing risks ahead of a longer-term push to get Congress to pass laws regulating the technology.
- Some advocates for AI regulation said more needs to be done to hold the companies and their products accountable, calling for wider public deliberation on the issue.