The interviewees also address the idea of existential risk (x-risk) from AI, which they believe rests on a "tower of fallacies". In their view, the real risks come from people directing AI to do harmful things, not from AI developing agency on its own, and the proposed remedies worry them because they would concentrate power in a handful of AI companies. They also contend that the media overstates how widespread concern about x-risk is within the AI research community, while acknowledging that strategic funding by x-risk-focused organizations may distort perceptions of its importance.
Key takeaways:
- Narayanan and Kapoor argue that most of the "snake oil" in AI is concentrated in predictive AI, where tools are sold based on unproven or untested claims.
- They suggest that AI companies should start publishing regular transparency reports, similar to social media giants, to provide insights into how their technology is being used and its potential harms.
- While acknowledging concerns about the pace and quality of research posted to arXiv.org, Kapoor defends the platform, crediting it with reducing gatekeeping in academia and making room for research that challenges established norms.
- Both researchers dispute the claim that artificial intelligence poses an existential risk to humanity, arguing that it rests on a "tower of fallacies" and that the proposed solutions would only increase risk by concentrating power in the hands of a few AI companies.