The program's goal was to fund individuals, teams, and organizations to develop concepts that could answer questions about AI governance and guardrails. The grant recipients' work ranged from video chat interfaces to platforms for crowdsourced audits of AI models. However, OpenAI's attempt to separate the program from its commercial interests has been met with skepticism, given CEO Sam Altman's criticisms of AI regulation.
Key takeaways:
- OpenAI is forming a new Collective Alignment team to implement public ideas on how to ensure its AI models align with human values.
- The Collective Alignment team is an extension of OpenAI's public program launched last year, which aimed to set up a democratic process for deciding AI systems' rules.
- All the code used in the grantees' work has been made public, along with brief summaries of each proposal and high-level takeaways.
- OpenAI's leadership, including CEO Sam Altman, argues that the pace of AI innovation is so fast that existing authorities cannot adequately control the technology, hence the need to crowdsource the work.