
OpenAI announces team to build 'crowdsourced' governance ideas into its models | TechCrunch

Jan 16, 2024 - techcrunch.com
OpenAI is forming a new Collective Alignment team to implement public ideas on how to ensure its future AI models align with human values. The team, composed of researchers and engineers, will develop a system to collect and encode public input on the behavior of its models into OpenAI products and services. The initiative is an extension of OpenAI's public program launched last May, which aimed to establish a democratic process for deciding the rules AI systems should follow.

The goal of the program was to fund individuals, teams, and organizations to develop concepts that could answer questions about AI governance and guardrails. The work of the grant recipients ranged from video chat interfaces to platforms for crowdsourced audits of AI models. However, OpenAI's attempt to separate the program from its commercial interests has been met with skepticism, given CEO Sam Altman's criticisms of AI regulation.

Key takeaways:

  • OpenAI is forming a new Collective Alignment team to implement public ideas on how to ensure its AI models align with human values.
  • The Collective Alignment team is an extension of OpenAI's public program launched last year, which aimed to set up a democratic process for deciding AI systems' rules.
  • All the code used in the grantees' work has been made public, along with brief summaries of each proposal and high-level takeaways.
  • OpenAI's leadership, including CEO Sam Altman, argues that the pace of AI innovation is so fast that existing authorities can't adequately control the technology, hence the need to crowdsource governance ideas.