The Princeton researchers calling out ‘AI snake oil’ | Semafor

Sep 19, 2023 - semafor.com
The interview with Princeton researchers Arvind Narayanan and Sayash Kapoor examines the issues surrounding AI technology and its applications. They draw a distinction between predictive and generative AI, arguing that most of the "snake oil", applications sold on misleading or unproven claims, is found in predictive AI. They also discuss the potential harms of generative AI, such as non-consensual deepfakes, and call on AI companies to publish regular transparency reports. Kapoor defends non-peer-reviewed research posted on arXiv.org, arguing that the platform reduces gatekeeping in academia and makes room for innovative research.

The interviewees also address the idea of existential risk (x-risk) from AI, which they believe rests on a "tower of fallacies". They argue that the real risks come from people directing AI to do harmful things, not from AI developing agency of its own. They are also concerned that the solutions proposed to counter x-risk would concentrate power in a handful of AI companies. In their view, the media exaggerates how widespread concern about x-risk is within the AI research community, though they acknowledge that strategic funding by x-risk-focused organizations may distort perceptions of its importance.

Key takeaways:

  • Narayanan and Kapoor argue that most of the "snake oil" in AI is concentrated in predictive AI, where tools are sold based on unproven or untested claims.
  • They suggest that AI companies should start publishing regular transparency reports, similar to social media giants, to provide insights into how their technology is being used and its potential harms.
  • While acknowledging concerns about the pace and quality of research on arXiv.org, Kapoor defends the platform for reducing gatekeeping in academia and promoting research that challenges established norms.
  • Both researchers dispute the idea that artificial intelligence presents an existential risk to humanity, arguing that the concept rests on a "tower of fallacies" and that the proposed solutions would only increase risk by concentrating power in the hands of a few AI companies.