
Thread by @janleike on Thread Reader App

May 19, 2024 - threadreaderapp.com
Jan Leike, formerly head of alignment, superalignment lead, and executive at OpenAI, has announced his departure from the company. Leike, who led the launch of the first RLHF-trained LLM with InstructGPT and published the first scalable oversight research on LLMs, cited disagreements with OpenAI's leadership over the company's core priorities as the reason for leaving. He believes more focus should be placed on preparing for the next generations of models, including security, monitoring, preparedness, safety, adversarial robustness, alignment, confidentiality, societal impact, and related topics.

Leike expressed concern that OpenAI is not on the right trajectory to address these issues, stating that his team had been struggling to secure the resources needed for crucial research. He criticized the company for prioritizing "shiny products" over safety culture and processes, and called for OpenAI to become a "safety-first AGI company". He urged his former colleagues to understand the gravity of their work and to drive the necessary cultural change within the company.

Key takeaways:

  • Jan Leike has stepped down from his roles at OpenAI, citing disagreements with the company's leadership over its core priorities.
  • Leike believes more focus should be on preparing for the next generations of models, including aspects such as security, monitoring, preparedness, safety, adversarial robustness, alignment, confidentiality, societal impact, and more.
  • He expressed concern that OpenAI is not on a trajectory to get these aspects right, and that safety culture and processes have been overshadowed by product development.
  • Leike urges OpenAI to become a safety-first AGI company and encourages employees to act with the seriousness appropriate for the development of AGI.
