However, some of the AI's mistakes can be dangerous, such as giving incorrect advice on treating a rattlesnake bite or misidentifying a poisonous mushroom as edible. These errors point to an inherent problem with training AI models on internet content, much of which is misleading or outright false. Despite extensive testing and refinement, the AI can still be "poisoned" when its own erroneous outputs circulate online and feed back into future training data, creating a cycle of misinformation.
Key takeaways:
- Google's AI search feature has been providing incorrect and sometimes dangerous information, leading to criticism and memes on social media.
- Despite extensive testing, Google's AI has made errors such as suggesting running with scissors as a cardio exercise and recommending harmful actions for treating a rattlesnake bite.
- These errors underscore the inherent risk of training AI models on internet data, which is frequently unreliable or false.
- While tech companies often downplay the impact of these flaws, the errors can serve as useful feedback for refining AI systems, and they highlight the need for more rigorous testing before products are released.