
Why can’t anyone agree on how dangerous AI will be?

Mar 16, 2024 - vox.com
The Forecasting Research Institute conducted a study to understand why AI experts and superforecasters disagree about the potential dangers of AI. The study found that the experts were more concerned about AI risks, while the superforecasters were less worried. Despite exposure to new information and opposing viewpoints, neither group significantly changed its stance. The main points of disagreement concerned the long-term future of AI and fundamental worldview differences about where the burden of proof falls in the debate.

The paper also emphasized that the disagreement stems not from a lack of information or exposure to differing viewpoints, but from deep-rooted, hard-to-change differences in people's assumptions about how the world works and where the burden of proof should fall, which makes the dispute difficult to resolve.

Key takeaways:

  • A new study from the Forecasting Research Institute aimed to understand the differing views on the potential dangers of AI, by bringing together experts on AI and other existential risks, and “superforecasters” with a track record of successfully predicting world events.
  • The study found that the two groups disagreed significantly, with AI experts generally more concerned about potential disaster than the superforecasters. Despite exposure to new information and differing viewpoints, both groups largely maintained their initial beliefs.
  • The main disagreements were not about short-term predictions but about differing views of the long-term future of AI. Optimists generally believed that human-level AI will take longer to build than pessimists did.
  • The most significant source of disagreement was identified as “fundamental worldview disagreements”, essentially differing views on where the burden of proof lies in the debate over AI’s potential dangers.
