"I lost trust": Why the OpenAI team in charge of safeguarding humanity imploded

May 17, 2024 - vox.com
OpenAI, the maker of ChatGPT, has been losing a significant number of employees, particularly those focused on AI safety. Key departures include Ilya Sutskever and Jan Leike, leaders of the superalignment team, which was tasked with ensuring AI systems remain aligned with their creators' goals. The exodus has been ongoing since last November, when the board attempted to fire CEO Sam Altman, who ultimately retained his position. At least five other safety-conscious employees have left since then, with insiders citing a loss of faith in Altman's leadership and a perceived lack of commitment to safety.

The departures have raised concerns about the future of safety work at OpenAI. With the superalignment team significantly reduced, there are doubts about how much forward-looking safety research can be expected from the company. While current products like ChatGPT are not seen as immediate threats, insiders and former employees worry about whether the company can build and deploy AGI or superintelligence safely.

Key takeaways:

  • OpenAI has been losing key employees, including Ilya Sutskever and Jan Leike, leaders of the company's superalignment team, which was tasked with ensuring AI safety.
  • Since November, at least five more safety-conscious employees have left the company, with many reportedly losing faith in CEO Sam Altman.
  • Former employee Daniel Kokotajlo refused to sign the offboarding agreement, allowing him to criticize the company's approach to AI safety.
  • With the departure of key safety team members, there are concerns about how much serious, forward-looking safety work can be expected from OpenAI in the future.
