
Moving AI governance forward

Jul 21, 2023 - openai.com
OpenAI and other leading AI labs are making voluntary commitments to enhance the safety, security, and trustworthiness of AI technology and services, in a process coordinated by the White House. The commitments are designed to guide the development and use of AI technology, and will remain in effect until regulations covering substantially the same issues come into force. The commitments apply to generative models that are more powerful than the current industry frontier, and include internal and external red-teaming of models, information sharing among companies and governments, investing in cybersecurity, incentivizing third-party discovery of vulnerabilities, and developing mechanisms that enable users to determine whether content is AI-generated.

The companies also commit to publicly reporting model capabilities, limitations, and appropriate domains of use, prioritizing research on the societal risks posed by AI systems, and developing AI systems to address society's greatest challenges. The commitments are part of an ongoing collaboration with governments, civil society organizations, and others around the world to advance AI governance. The companies also pledge to continue investing in research areas that can inform regulation, such as techniques for assessing potentially dangerous capabilities in AI models.

Key takeaways:

  • OpenAI and other leading AI labs are making voluntary commitments to reinforce the safety, security, and trustworthiness of AI technology, coordinated by the White House.
  • These commitments include internal and external red-teaming of models, sharing information among companies and governments about risks and vulnerabilities, and investing in cybersecurity to protect proprietary and unreleased model weights.
  • Companies also commit to developing mechanisms that enable users to understand if content is AI-generated, publicly reporting model capabilities and limitations, and prioritizing research on societal risks posed by AI systems.
  • These commitments apply only to generative models that are more powerful than the current industry frontier, i.e., models more capable than any currently released model, including GPT-4, Claude 2, PaLM 2, Titan, and DALL-E 2.
