The biases are likely due to inadequate training data and testing processes. In response to these findings, Meta stated that it is working on improving its generative AI technology. The company emphasized the importance of not perpetuating stereotypes and ensuring diverse representation in AI tools. However, it did not provide specific details about how it plans to address these issues.
Key takeaways:
- Meta's AI chatbot, Meta AI, has been found to exhibit cultural biases, particularly when generating images of Indian men, whom it disproportionately depicts wearing turbans.
- Despite diverse settings and clothing across prompts, all of the generated men wore turbans, a proportion that does not reflect the actual share of turban-wearing men in India.
- Meta AI also showed hints of class bias and a lack of diverse representation in its training data for India, often generating stereotypical images in response to prompts.
- Meta has responded to these findings by stating that it is working on improving its generative AI technology, but it did not provide much detail about the process.