The article also highlights other recent AI news, including TechCrunch's series on women in AI, Stability AI's launch of Stable Diffusion 3, and Google's new Gemini-powered tool in Chrome. It further notes a quiz game developed by McKinney to highlight AI bias, a public letter signed by AI luminaries calling for anti-deepfake legislation in the U.S., and the formation of a new AI safety organization by DeepMind. The article concludes with a caution from Berkeley researchers about gender bias in Google's image search results.
Key takeaways:
- Google paused its AI chatbot Gemini’s ability to generate images of people after users complained about historical inaccuracies and biases. The company had hardcoded adjustments in an attempt to correct for biases in its model.
- Google's handling of race-based prompts in Gemini has been criticized for attempting to conceal the worst of the model’s biases rather than addressing them in the broader context of the training data from which they arise.
- AI models often lack explanation and transparency about their biases. Addressing these shortcomings head-on, in humble and transparent language, would be more effective than haphazard attempts at “fixing” what is essentially unfixable bias.
- Other AI news includes TechCrunch launching a series highlighting notable women in AI, Stability AI announcing Stable Diffusion 3, Google's new Gemini-powered tool in Chrome, and hundreds of AI luminaries calling for anti-deepfake legislation in the U.S.