The article also highlights how AI tools are not neutral arbiters of information, because they are trained by humans and governed by human-made rules. It cautions against government intervention to correct AI bias and argues that a diverse marketplace of AI tools is the best remedy for their limitations and biases. The piece ends by predicting that AI outputs will become a rich source of material for politicians looking to challenge Big Tech.
Key takeaways:
- Google's AI tool, Gemini, faced controversy for generating images of historical figures such as America's Founding Fathers, Vikings, and the Pope as people of color, which some saw as an attempt to rewrite history.
- Google paused Gemini's ability to generate images of people and acknowledged that the tool had overcompensated in some cases and been overly conservative in others, producing incorrect images.
- AI tools are not neutral arbiters of information, because they are trained by humans and governed by human-made rules, and they can be drafted into the culture war, facing accusations of being too progressive or too conservative.
- Politicians are beginning to scrutinize AI tools for potential bias: House Judiciary Committee Chairman Jim Jordan has asked Google's parent company Alphabet for documents relating to Gemini's content moderation, and Montana Attorney General Austin Knudsen has accused Gemini of providing inaccurate information that fits Google's political preferences.