
OpenAI scraps team that researched risk of 'rogue' AI

May 18, 2024 - businessinsider.com
OpenAI has disbanded its Superalignment team, which was dedicated to mitigating AI risks, just days after the team's leaders, Ilya Sutskever and Jan Leike, announced their resignations. The team, created in July 2023, aimed to solve the core technical challenges of superintelligence alignment in four years, a goal the company admitted was ambitious. The dissolution of the team came in the same week that OpenAI launched GPT-4o, its most human-like AI yet.

Sutskever, a co-founder and former chief scientist at OpenAI, had played a role in the ousting of CEO Sam Altman in November. Although he expressed regret for contributing to Altman's removal, his future at OpenAI had been uncertain since Altman's reinstatement. Leike, in a series of posts, said his departure followed disagreements over the company's core priorities, accusing OpenAI of prioritizing "shiny products" over safety. Some of the Superalignment team's remaining members have been integrated into other OpenAI teams.

Key takeaways:

  • OpenAI has dissolved its Superalignment team, which was dedicated to mitigating AI risks, following the resignation of its leaders, Ilya Sutskever and Jan Leike.
  • Ilya Sutskever, a co-founder and former chief scientist at OpenAI, played a role in the ousting of CEO Sam Altman, and his future at OpenAI had been uncertain since Altman's reinstatement.
  • The Superalignment team's mission was to dedicate 20% of OpenAI's computing power over the next four years to building a human-level automated alignment researcher, a goal the company itself admitted was incredibly ambitious.
  • Jan Leike criticized OpenAI for prioritizing the release of "shiny products" over safety, stating that building generative AI is an inherently dangerous endeavor and that OpenAI must become a safety-first AGI company.
