
OpenAI is haemorrhaging safety talent

May 17, 2024 - transformernews.ai
OpenAI, a leading artificial intelligence research lab, is experiencing a significant exodus of employees, particularly those concerned about AI safety. High-profile departures include Superalignment co-leads Ilya Sutskever and Jan Leike, along with six other safety-focused employees. Several of the departing researchers have cited concerns about the company's safety culture and priorities, and the Superalignment team itself has now been shut down.

The departures follow a failed attempt to oust OpenAI CEO Sam Altman last November, which sparked an employee revolt and raised concerns about the company's governance. The situation echoes a similar wave of departures in 2021, which also included key safety-focused employees. The ongoing exodus raises questions about the company's commitment to AI safety and its overall direction.

Key takeaways:

  • Several safety-minded employees, including Ilya Sutskever and Jan Leike, have recently resigned from OpenAI, suggesting a decline in the company's safety-first culture.
  • Jan Leike and Daniel Kokotajlo have explicitly stated that they left due to concerns over the company's approach to safety, with Leike criticizing the company's focus on "shiny products" over safety processes.
  • OpenAI's Superalignment team, which was led by some of the departing employees, has been shut down, with its work being integrated across the company's broader research efforts.
  • The recent departures follow a failed attempt to oust Sam Altman, OpenAI's CEO, last November, which raised concerns about the company's governance and led to an employee revolt.
