
OpenAI training its next major AI model, forms new safety committee

May 29, 2024 - arstechnica.com
OpenAI has announced the formation of a new "Safety and Security Committee" to manage risks associated with its projects and operations. The move comes as the company begins training its next frontier model, which it hopes will bring it closer to achieving artificial general intelligence (AGI). The committee, led by OpenAI directors Bret Taylor, Adam D'Angelo, Nicole Seligman, and CEO Sam Altman, will be responsible for making recommendations about AI safety to the company's full board of directors. The committee's first task is to evaluate and further develop existing processes and safeguards over the next 90 days, after which it will share its recommendations with the board and the public.

The announcement is a response to recent criticism following the resignation of two members of OpenAI's Superalignment team, which raised doubts about the company's commitment to developing highly capable AI safely. There are also rumors that progress in large language models (LLMs) has plateaued recently, with competing models topping out at roughly GPT-4-level capability. OpenAI's recent release of GPT-4o, which is similar in ability to GPT-4 but faster, rather than a significantly more capable model, has added to this speculation.

Key takeaways:

  • OpenAI has announced the formation of a new "Safety and Security Committee" to oversee risk management for its projects and operations. The committee will be led by OpenAI directors Bret Taylor, Adam D'Angelo, Nicole Seligman, and CEO Sam Altman.
  • The committee's first task will be to evaluate and further develop the company's safety processes and safeguards over the next 90 days. Its recommendations will then be shared with the full board, and an update on adopted recommendations will be shared publicly.
  • The formation of the committee comes after criticism following the resignation of two members of OpenAI's Superalignment team, Ilya Sutskever and Jan Leike, which led to concerns about the company's commitment to developing highly capable AI safely.
  • There are rumors that progress in large language models has plateaued recently, with models such as Anthropic's Claude Opus and Google's Gemini 1.5 Pro being roughly equivalent to the GPT-4 family in capability. OpenAI's recent release of GPT-4o, which is roughly equivalent in ability to GPT-4 but faster, has added to this speculation.
