
OpenAI forms new team to assess “catastrophic risks” of AI

Oct 26, 2023 - theverge.com
OpenAI is creating a new team to address the "catastrophic risks" associated with AI, including nuclear, chemical, biological, and radiological threats, as well as autonomous replication. The team will also focus on AI's potential to deceive humans and on cybersecurity threats. It will be led by Aleksander Madry, currently on leave from his role as director of MIT's Center for Deployable Machine Learning, and will develop a "risk-informed development policy" to monitor AI models.

OpenAI CEO Sam Altman has previously warned about the potential dangers of AI, stating that mitigating the risk of extinction from AI should be a global priority. He also suggested that governments should treat AI with the same seriousness as nuclear weapons.

Key takeaways:

  • OpenAI is creating a new team to mitigate potential catastrophic risks associated with AI, including nuclear, chemical, biological, and radiological threats, as well as autonomous replication.
  • The team will also address risks such as AI's ability to deceive humans and cybersecurity threats.
  • Aleksander Madry, on leave from his role as the director of MIT’s Center for Deployable Machine Learning, will lead the preparedness team.
  • OpenAI CEO Sam Altman has previously warned about the potential for catastrophic events caused by AI and suggested that governments should treat AI as seriously as nuclear weapons.
