The agreement follows the world’s first AI Safety Commitments, made by 16 companies including Amazon, Google, IBM, Microsoft, and Samsung Electronics. These companies have committed not to develop or deploy a model or system if its risks cannot be mitigated below certain thresholds. The commitments aim to ensure transparency and accountability in the development of safe AI. The U.K. and the U.S. also recently signed a memorandum of understanding to collaborate on research and guidance on AI safety.
Key takeaways:
- Government officials and AI industry executives have agreed to apply safety measures in the AI field and establish an international safety research network.
- The British government announced a new agreement between 10 countries and the EU to establish an international network of institutions, modeled on the U.K.’s AI Safety Institute, to accelerate the advancement of AI safety science.
- Leaders at the AI Summit in Seoul agreed to the Seoul Declaration, emphasizing increased international collaboration to build human-centric, trustworthy, and responsible AI that addresses global issues and upholds human rights.
- Last month, the U.K. and the U.S. signed a partnership memorandum of understanding to collaborate on research, safety evaluation, and guidance on AI safety, and 16 companies involved in AI, including Amazon, Google, IBM, and Microsoft, have agreed to the AI Safety Commitments.