
OpenAI Insider Estimates 70 Percent Chance That AI Will Destroy or Catastrophically Harm Humanity

Jun 10, 2024 - futurism.com
Former OpenAI governance researcher Daniel Kokotajlo has warned that the odds of AI causing catastrophic harm to humanity are greater than a coin flip. In an interview with The New York Times, Kokotajlo accused OpenAI of ignoring the risks posed by artificial general intelligence (AGI) in its race to develop it, estimating the chance of AI causing significant harm to humanity at around 70%. Kokotajlo, along with other former and current AI industry employees, has released an open letter asserting their "right to warn" the public about the risks posed by AI.

Kokotajlo joined OpenAI in 2022 and became convinced that AGI would be achieved by 2027, with a high probability of causing catastrophic harm. He urged OpenAI CEO Sam Altman to focus more on safety measures, but felt his concerns were not taken seriously. In April, he resigned from the company, stating in an email that he had "lost confidence that OpenAI will behave responsibly" in its pursuit of near-human-level AI. OpenAI responded to the concerns by stating their commitment to providing safe AI systems and engaging in rigorous debate about the technology's risks.

Key takeaways:

  • Former OpenAI governance researcher Daniel Kokotajlo has accused the company of ignoring the potential risks of artificial general intelligence (AGI), claiming that there is a 70% chance that AI could cause catastrophic harm to or even destroy humanity.
  • Kokotajlo, along with other former and current employees of Google DeepMind and Anthropic, has released an open letter asserting their "right to warn" the public about the risks posed by AI.
  • Kokotajlo quit OpenAI in April, stating in an email that he had "lost confidence that OpenAI will behave responsibly" as it continues to develop near-human-level AI.
  • OpenAI responded to the concerns by stating that they are proud of their track record in providing the most capable and safest AI systems and that they believe in their scientific approach to addressing risk. They also mentioned having avenues for employees to express their concerns, including an anonymous integrity hotline and a Safety and Security Committee.
