The Superalignment team was formed in July last year and co-led by Leike and OpenAI co-founder Ilya Sutskever, who also resigned this week. Although the team published safety research and awarded millions of dollars in grants to outside researchers, it struggled to secure the resources it needed as product launches increasingly consumed OpenAI leadership's attention. The team's work will now be overseen by another OpenAI co-founder, John Schulman, but without a dedicated team, raising concerns that OpenAI's AI development may not be as safety-focused as it could have been.
Key takeaways:
- OpenAI's Superalignment team, responsible for developing ways to govern and steer superintelligent AI systems, was denied the 20% of the company's compute resources it had been promised, leading several team members to resign, including co-lead Jan Leike.
- Leike, a former DeepMind researcher, has publicly expressed his disagreement with OpenAI leadership over the company's core priorities, advocating for more focus on security, monitoring, preparedness, safety, adversarial robustness, alignment, confidentiality, societal impact, and related topics.
- OpenAI co-founder Ilya Sutskever, who co-led the Superalignment team, also resigned from the company following a conflict with CEO Sam Altman, a dispute that had been a major distraction for the team.
- Following the departures of Leike and Sutskever, the Superalignment team will no longer exist as a dedicated unit. Its work will instead be carried out by a loosely associated group of researchers embedded in divisions throughout the company, a restructuring that has raised concerns about how safety-focused OpenAI's AI development will remain.