Google’s hidden AI diversity prompts lead to outcry over historically inaccurate images

Feb 22, 2024 - arstechnica.com
Google has paused its Gemini AI image-synthesis feature following criticism that it was inaccurately inserting diversity into historical images. Critics argued that the tool was creating a revisionist history by depicting multi-racial Nazis and medieval British kings of unlikely nationalities. The controversy has sparked debates about the role of AI in representing diversity and the potential for erasing historical realities of race and gender discrimination. Google is now working to address these issues and plans to re-release an improved version of the image generation feature soon.

The controversy highlights the ongoing struggle of AI researchers to balance diversity and accuracy in AI outputs. Companies like OpenAI have previously faced similar issues and developed techniques to insert diversity-related terms into image-generation prompts in a way that is hidden from the user. However, these techniques have also produced awkward and sometimes controversial results. Some experts suggest that the solution lies in a diverse set of AI assistants reflecting the range of languages, cultures, value systems, and political opinions around the world.

Key takeaways:

  • Google has paused its Gemini AI image-synthesis feature due to criticism that it was inserting diversity into its images in a historically inaccurate way, such as depicting multi-racial Nazis and medieval British kings with unlikely nationalities.
  • OpenAI had previously invented a technique to insert terms reflecting diversity into image-generation prompts in a way that was hidden from the user, a technique that Google's Gemini system seems to use as well.
  • The controversy reflects the ongoing struggle in which AI researchers find themselves stuck in the middle of ideological and cultural battles online, with different factions demanding different results from AI products.
  • Google could fix the issue by modifying its system instructions to avoid inserting diversity terms when the prompt involves a historical subject, and says it is working on improving these kinds of depictions.
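The fix described in the last takeaway can be sketched in a few lines: append diversity-related terms to a user's prompt before it reaches the image model, but skip the augmentation when the prompt mentions a historical subject. This is a hypothetical illustration only; the keyword lists, term wording, and function names below are invented for the sketch and do not reflect Google's or OpenAI's actual implementations.

```python
import random

# Illustrative term and keyword lists -- not from any real system.
DIVERSITY_TERMS = ["of diverse ethnicities", "of various genders"]
HISTORICAL_KEYWORDS = {"medieval", "nazi", "viking", "1800s"}

def augment_prompt(user_prompt: str, seed: int = 0) -> str:
    """Return the prompt actually sent to the image model.

    Historical prompts pass through untouched; all others get a
    randomly chosen diversity term appended, hidden from the user.
    """
    words = set(user_prompt.lower().split())
    if words & HISTORICAL_KEYWORDS:
        return user_prompt  # leave historical subjects unmodified
    rng = random.Random(seed)
    return f"{user_prompt}, {rng.choice(DIVERSITY_TERMS)}"
```

A real system would need far more robust intent detection than keyword matching (e.g., a classifier for historical context), but the control flow, augment by default, bypass on detected historical intent, is the shape of the fix the article describes.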