The article also highlights the risks of relying on AI in search, suggesting that tech companies could mitigate them by being more transparent about generative AI, publishing information about the quality of the facts they provide, using coding techniques that help the bot fact-check itself, opening their tools to researchers for stress-testing, and adding more human oversight to outputs. However, Google's recent layoffs in its Google News division, which has previously worked with professional fact-checking organizations, suggest the company is not investing more heavily in fact-checking as it develops its generative-AI tool.
Key takeaways:
- Google's search engine is becoming less reliable due to the influence of generative AI, with incorrect information presented as fact, such as the false claim that no African country's name begins with the letter 'K' (Kenya does).
- The misinformation often originates in user posts on online message boards, which Google's crawlers then scrape and surface as featured answers.
- Google's generative-AI tool is also prone to producing flawed writing of its own, spreading misinformation or outright nonsense.
- Experts suggest that tech companies could mitigate the potential harms of AI in search by becoming more transparent about generative AI, publishing information about the quality of the facts they provide, and adding more human oversight to outputs.