Sutskever, a co-founder and former chief scientist at OpenAI, had played a role in the ousting of CEO Sam Altman in November. Although he later expressed regret for contributing to Altman's removal, his future at OpenAI had been uncertain since Altman's reinstatement. Leike, in a series of posts, said his departure followed disagreements over the company's core priorities, accusing OpenAI of prioritizing "shiny products" over safety. Some of the Superalignment team's remaining members have been integrated into other OpenAI teams.
Key takeaways:
- OpenAI has dissolved its Superalignment team, which was dedicated to mitigating the long-term risks of advanced AI, following the resignations of its leaders, Ilya Sutskever and Jan Leike.
- Ilya Sutskever, a co-founder and former chief scientist at OpenAI, played a role in the ousting of CEO Sam Altman, and his future at the company had been uncertain since Altman's reinstatement.
- The Superalignment team's mission was to use 20% of the compute OpenAI had secured to date, over four years, to build a roughly human-level automated alignment researcher, a goal the company itself acknowledged was incredibly ambitious.
- Jan Leike criticized OpenAI for prioritizing the release of "shiny products" over safety, writing that building smarter-than-human machines is an inherently dangerous endeavor and that OpenAI must become a safety-first AGI company.