
Google’s Relationship With Facts Is Getting Wobblier

Nov 07, 2023 - theatlantic.com
The article discusses how Google's search engine, now augmented with generative AI, is becoming less reliable as AI-generated misinformation spreads. The author cites an example in which Google incorrectly states that no African country starts with the letter 'K', despite Kenya being an obvious counterexample. The false claim was originally generated by ChatGPT and was picked up by Google's crawlers from a user post on Hacker News, which was quoting a website called Emergent Mind. The author argues that Google's generative AI can be manipulated by false or nonsensical information, making it less reliable even for clear, easily verifiable facts.

The article also highlights the risks of relying on AI in search and suggests ways tech companies could mitigate them: being more transparent about generative AI, publishing information about the quality of the facts provided, using coding techniques that help the bot self-fact-check, opening up their tools to researchers for stress-testing, and adding more human oversight to outputs. However, Google's recent layoffs in its Google News division, which has previously worked with professional fact-checking organizations, suggest the company is not investing more in fact-checking as it develops its generative AI tools.

Key takeaways:

  • Google's search engine is becoming less reliable due to the influence of generative AI, with instances of incorrect information being presented as fact, such as the claim that no African country begins with the letter 'K'.
  • The misinformation is often sourced from user posts on online message boards, which are then scraped by Google's crawlers and presented as a featured answer.
  • Google's generative-AI tool is also liable to produce flawed AI writing, leading to the spread of misinformation or nonsensical information.
  • Experts suggest that tech companies could mitigate the potential harms of relying on AI in search by becoming more transparent about generative AI, publishing information about the quality of facts provided, and adding more human oversight to their outputs.
