OpenAI's "Preparedness" team will continuously assess the performance of its AI systems across four risk categories: cybersecurity, chemical, biological, and nuclear threats. The Safety Systems team will work to reduce misuse of existing models and technologies, while the Superalignment team focuses on developing safe superintelligent models. The head of the Preparedness team, Aleksander Madry, will submit a monthly report to a new internal safety advisory group, which will then make recommendations to the company's leadership and board.
Key takeaways:
- OpenAI has announced a new "Preparedness Framework" to mitigate the risks and misuse of its AI systems; it combines ongoing safety assessments with an advisory group intended to keep its AI models safe.
- The company recently hosted its first developer showcase, unveiling a range of new AI products. It plans to deploy its newest technology only in areas deemed safe and is forming an advisory group to review safety reports.
- OpenAI has three safety teams: Preparedness, Safety Systems, and Superalignment. Preparedness assesses the risks of its models, Safety Systems works to reduce misuse of existing models, and Superalignment works toward safe superintelligent models.
- The company will release only AI models that its team has assigned a risk rating of "medium" or "low". The final decision to release a new AI system lies with the leadership team, but the board can overturn that decision.