When Might AI Outsmart Us? It Depends Who You Ask

Jan 19, 2024 - time.com
The article discusses varying predictions about when artificial general intelligence (AGI), systems that can perform any task a human can, will be developed. Some AI pioneers, such as Shane Legg of Google DeepMind, predict AGI could arrive as soon as 2028, and Dario Amodei of Anthropic and Sam Altman of OpenAI similarly expect it within the next few years. By contrast, a survey of AI experts suggests only a 50% chance of AGI by 2047, and a group of "superforecasters" estimates a 75% chance by 2100. The article also discusses the "scaling hypothesis," the idea that continuing to increase computational power and training data will inevitably lead to AGI.

The article highlights the high stakes of AGI development, including potential human extinction and the replacement of human labor. It also mentions concerns about AI-generated deepfakes and AI empowering dangerous groups. Given these concerns and the possibility of AGI development by 2030, the article argues that policymakers and companies should prepare now, with measures such as safety research, mandatory safety testing, and coordination between entities developing powerful AI systems.

Key takeaways:

  • There are varying predictions about when artificial general intelligence (AGI), systems that can perform any task a human can, will be developed. Some experts, like Google DeepMind co-founder Shane Legg, estimate it could arrive as soon as 2028, while others predict it could take until 2047 or even 2100.
  • Many leaders in AI companies subscribe to the scaling hypothesis, which suggests that continuing to train AI models with increasing computational power and data will inevitably lead to AGI.
  • Despite rapid progress in AI, there is considerable uncertainty and disagreement about when AGI might be developed. Factors influencing these debates include how well current methods for building AI will continue to work, how far today's systems remain from human-level capability, and people's fundamental beliefs about how much and how quickly the world is likely to change.
  • There are significant concerns about the societal implications of AI, with 89% of experts surveyed expressing substantial or extreme concern about AI-generated deepfakes and 73% similarly concerned that AI could empower dangerous groups. As such, there are calls for policymakers and companies to prepare now, with measures such as investment in safety research, mandatory safety testing, and coordination between entities developing powerful AI systems.