OpenAI No Longer Takes Safety Seriously

May 22, 2024 - lawfaremedia.org
OpenAI, a leading lab in artificial intelligence (AI), recently disbanded its Superalignment team, which was dedicated to ensuring the safety of its AI products. The team's co-heads, Ilya Sutskever and Jan Leike, resigned from the company, with Leike publicly protesting what he described as the company's waning commitment to mitigating AI risks. Other safety-focused employees have also left the company, raising concerns about OpenAI's commitment to AI safety.

The departures and disbandment of the Superalignment team suggest a shift in OpenAI's approach to AI safety, which it previously acknowledged as a potential existential threat to humanity. Critics argue that OpenAI may be prioritizing profit over public safety, a concern amplified by the company's increasing user counts, rising valuation, and potential regulatory threats.

Key takeaways:

  • OpenAI, a leading lab pursuing AGI, recently disbanded its Superalignment team, which was dedicated to ensuring its AI products did not pose a threat to humanity. This followed the resignation of the team's co-heads, Ilya Sutskever and Jan Leike.
  • Several other employees with safety-focused roles have also left OpenAI recently, raising concerns about the company's commitment to AI safety.
  • OpenAI has previously acknowledged the potential existential threat posed by AI and has taken steps to mitigate this risk, such as dedicating 20% of its computing resources to the Superalignment team and adopting an unusual governance structure.
  • The recent departures and disbanding of the Superalignment team suggest a shift in OpenAI's approach to AI safety, which could be a cause for concern given the potential dangers of advanced AI.