The article highlights concerns that these AI image generators could reinforce harmful beauty standards and fuel body-image distress, and it examines the technical challenges and costs of addressing these biases. The biases stem largely from the models' training data, which is typically scraped from the internet and heavily weighted towards the perspectives of people in the U.S. and Europe. The article concludes by noting the growing use of these tools in industries such as advertising and the risk that they could undo progress on depicting diversity in popular culture.
Key takeaways:
- AI-generated images are shaping cultural norms, particularly in the depiction of female beauty. The Washington Post found that leading image generators such as DALL-E, Midjourney, and Stable Diffusion often produce images of thin, light-skinned women, reinforcing narrow standards of attractiveness.
- AI artist Abran Maldonado criticized these tools for their lack of diversity, stating that derogatory terms were needed to generate images of Black women with larger bodies. He expressed concern that the increasing commercial use of these tools could reverse progress in depicting diversity in popular culture.
- OpenAI, the maker of DALL-E, acknowledged the tool's bias towards "stereotypical and conventional ideals of beauty" and warned that it could reinforce harmful views on body image. The company is working to address these biases but faces technical challenges in diversifying the gender presentations and body types its model produces.
- The data used to train these AI models is typically scraped from the internet, which introduces biases into the generated images. It does not include material from China or India, even though those countries have the world's largest populations of internet users, leaving it heavily weighted towards the perspectives of people in the U.S. and Europe.