The departures have raised concerns about the future of safety work at OpenAI. With the superalignment team significantly reduced, it is unclear how much serious, forward-looking safety research the company will sustain. While current products like ChatGPT are not seen as immediate threats, insiders and former employees worry about whether the company is on a trajectory to build and deploy AGI or superintelligence safely.
Key takeaways:
- OpenAI has been losing key employees, including Ilya Sutskever and Jan Leike, who were leaders of the company’s superalignment team, tasked with ensuring AI safety.
- Since November, at least five more safety-conscious employees have left the company, with many reportedly losing faith in CEO Sam Altman.
- Former employee Daniel Kokotajlo refused to sign the company's offboarding agreement, leaving him free to criticize its approach to AI safety.
- With the departure of key safety team members, there are concerns about how much serious, forward-looking safety work can be expected from OpenAI in the future.