OpenAI's announcement of a new Safety and Security Committee comes in response to recent criticism following the resignation of two senior members of its Superalignment team, which raised doubts about the company's commitment to developing highly capable AI safely. It also arrives amid rumors that progress in large language models (LLMs) has plateaued, with the strongest competing models roughly matching GPT-4 in capability. OpenAI's recent release of GPT-4o, which matches GPT-4 in ability but runs faster, rather than a significantly more capable successor, has fueled this speculation.
Key takeaways:
- OpenAI has announced the formation of a new 'Safety and Security Committee' to oversee risk management for its projects and operations. The committee will be led by OpenAI directors Bret Taylor, Adam D'Angelo, Nicole Seligman, and CEO Sam Altman.
- The committee's first task will be to evaluate and further develop the company's safety processes and safeguards over the next 90 days. It will then present its recommendations to the full board, and OpenAI will publicly share an update on the recommendations it adopts.
- The formation of the committee follows criticism after the resignations of Ilya Sutskever and Jan Leike, co-leads of OpenAI's Superalignment team, which raised concerns about the company's commitment to developing highly capable AI safely.
- There are rumors that progress in large language models has plateaued, with models such as Anthropic's Claude Opus and Google's Gemini 1.5 Pro roughly matching the GPT-4 family in capability. OpenAI's release of GPT-4o, equivalent in ability to GPT-4 but faster, has added to this speculation.