Tegmark's non-profit Future of Life Institute called for a six-month pause on advanced AI research last year in response to these fears, but no pause was agreed. Instead, a series of international AI summits has taken the lead on regulating the technology. Tegmark argues that the downplaying of the most severe risks is not healthy and is the result of industry lobbying. He believes tech leaders feel trapped in an impossible situation where they cannot stop even if they want to, and that safety will only be prioritized if governments impose safety standards.
Key takeaways:
- Max Tegmark, a leading scientist and AI campaigner, warns that big tech is distracting the world from the existential risk posed by artificial intelligence.
- Tegmark's non-profit Future of Life Institute called for a six-month pause on advanced AI research in response to these fears, but no pause was agreed.
- Despite the concerns raised by experts, the focus of international AI regulation has shifted away from existential risk.
- Tegmark believes that the downplaying of severe risks is a result of industry lobbying and compares the situation to the long delay in tobacco regulation that followed the discovery of the link between smoking and lung cancer.