
AI was asked to create images of Black African docs treating white kids. How'd it go?

Oct 07, 2023 - npr.org
The article discusses a study conducted by Arsenii Alenichev, a social scientist and postdoctoral fellow with the Oxford-Johns Hopkins Global Infectious Disease Ethics Collaborative, who used an AI program to generate images based on specific prompts. Despite specifying "Black African doctors providing care for white suffering children," the AI consistently depicted the children as Black and, in some cases, the doctors as white. This outcome, according to Alenichev, reflects the AI's inherent bias, likely influenced by the predominance of "white savior" imagery in its training data.

The article also highlights the efforts of Malik Afegbua, a Nigerian filmmaker, who successfully manipulated the AI to generate images of elegant African elders on fashion runways, challenging stereotypes of older people. The piece concludes by emphasizing the need for the global health community to acknowledge and address the biases in AI-generated images. It also notes the use of AI-generated images by organizations such as the World Health Organization, underscoring the real-world implications of these biases.

Key takeaways:

  • Arsenii Alenichev, a social scientist and postdoctoral fellow with the Oxford-Johns Hopkins Global Infectious Disease Ethics Collaborative, conducted an experiment to challenge the "white savior" stereotype in AI-generated images. Despite specifying "Black African doctors providing care for white suffering children," the AI program often depicted the children as Black and occasionally the doctors as white.
  • The AI program used, Midjourney, generates images based on a massive database of existing photos and images described with keywords. The results are essentially remixes of existing content, which often perpetuate stereotypes due to the prevalence of "white savior" imagery in the database.
  • Malik Afegbua, a Nigerian filmmaker, managed to manipulate the AI into generating images that challenged stereotypes of older people by feeding it specific images to influence the style and content of the output.
  • Alenichev argues that AI is not neutral or apolitical and that the global health community needs to discuss responsibility for challenging biased images and accountability when AI generates them. He is applying for a grant to further examine biases in artificial intelligence.
