In response to these concerns, OpenAI has pledged to retain third-party safety, security, and technical experts to support the committee's work. However, the company has not disclosed the size or composition of this outside expert group, nor has it clarified how much power or influence the group will actually have over the committee. Critics also note that OpenAI's stated willingness to address "valid criticisms" leaves the company itself as the judge of what counts as valid, and that the arrangement falls short of CEO Sam Altman's earlier promise to give outsiders a significant role in OpenAI's governance.
Key takeaways:
- OpenAI has formed a new committee to oversee safety and security decisions, staffed by company insiders including CEO Sam Altman, raising concerns among ethicists.
- The company has seen several high-profile departures from its safety team, with ex-staffers alleging that AI safety work has been de-prioritized.
- Former OpenAI board members Helen Toner and Tasha McCauley have expressed doubts about the company's ability to hold itself accountable, particularly under Altman's leadership.
- Despite promises of outsider involvement in governance, OpenAI has yet to implement such measures, and the company's new Safety and Security Committee has been criticized for potentially lacking real oversight power.