The investigation also highlighted Sora's challenges in representing diverse relationships and family dynamics, noting that it often defaults to homogeneous portrayals. Researchers observed a "stock image" aesthetic in Sora's videos, suggesting limitations in its training data or fine-tuning processes. Experts argue that addressing these biases requires more than technical fixes, advocating for greater disciplinary diversity on development teams and real-world testing to understand societal risks. As OpenAI expands Sora's availability, the commercial implications of biased outputs may increase the pressure to tackle these issues.
Key takeaways:
- AI-generated videos, like those from OpenAI's Sora, exhibit biases, perpetuating sexist, racist, and ableist stereotypes.
- Sora often portrays people in stereotypical roles and appearances, such as men in leadership positions and women in caregiving roles.
- The system struggles with diversity in relationships and often defaults to portraying people as young, attractive, and able-bodied.
- Addressing AI bias requires more than technical solutions; it needs interdisciplinary collaboration and diverse perspectives in development and testing.