This is not the first time Google's AI has faced backlash over its outputs. In 2015, an AI image classification tool made by Google misclassified black men as gorillas, prompting outrage. The company promised to fix the issue, but its solution was a workaround: it simply blocked the tool from recognizing gorillas altogether. Google is now working to improve the accuracy of Gemini's image generation.
Key takeaways:
- Google has temporarily suspended its AI suite Gemini's ability to generate images of people while it works to improve the historical accuracy of its outputs.
- The decision comes after criticism over the AI producing incongruous images of historical figures, such as depictions of the U.S. Founding Fathers as American Indian, black, or Asian.
- Google has faced previous criticism over AI bias, including a 2015 incident in which an AI image classification tool misclassified black men as gorillas.
- The company is working to address these issues and plans to re-release an improved version of the image generation tool soon.