AI expert Gary Marcus suggests that achieving 100% accuracy with AI is extremely challenging: while getting most answers right is comparatively easy, the final 20% requires reasoning akin to human fact-checking, which may necessitate artificial general intelligence (AGI). Marcus also argues that the large language models powering current AI systems are not the technology that will produce AGI. Amid competition from Bing, OpenAI, and a new AI search startup, Google is under pressure, which may be contributing to its hasty and problematic AI releases. The company has ambitious plans for AI Overviews, but its current focus is on getting the basics right.
Key takeaways:
- Google's new AI Overviews product has been delivering bizarre responses, such as advising users to put glue on their pizza or eat rocks, prompting a flurry of memes and forcing Google to manually disable AI Overviews for certain searches.
- Despite a year of testing and serving over a billion queries, AI Overviews has been criticized for low-quality output; Google insists that most of its responses are high-quality and that many of the strange examples circulating online were either uncommon queries or doctored.
- AI expert Gary Marcus argues that achieving 100% accuracy with AI is extremely challenging, as it requires the ability to reason and fact-check like a human, something that current AI models are incapable of.
- Google is under pressure to compete with other tech companies in the AI space, and this pressure may be contributing to the messy rollout of AI products, as seen with Google's AI Overviews and with Meta's Galactica, a model that likewise had to be taken down shortly after launch due to problematic outputs.