The paper also highlighted that the disagreement stems not from a lack of information or exposure to differing viewpoints, but from deep-rooted, hard-to-change differences in people's assumptions about how the world works and where the burden of proof should fall. The study concluded that this makes the disagreement difficult to resolve.
Key takeaways:
- A new study from the Forecasting Research Institute aimed to understand differing views on the potential dangers of AI by bringing together experts on AI and other existential risks with “superforecasters” who have a track record of accurately predicting world events.
- The study found that the two groups disagreed significantly, with AI experts generally more concerned about potential disaster than the superforecasters. Despite exposure to new information and opposing viewpoints, both groups largely maintained their initial beliefs.
- The main disagreements were not about short-term predictions but about the long-term future of AI. Optimists generally expected human-level AI to take longer to build than pessimists did.
- The most significant source of disagreement was identified as “fundamental worldview disagreements”: essentially, differing views on where the burden of proof lies in the debate over AI’s potential dangers.