This is not the first time Meta has faced issues with bias in its AI models. Previously, Instagram's auto-translate feature inserted the word "terrorist" into user bios written in Arabic, an error reminiscent of a 2017 Facebook mistranslation that led to the arrest of a Palestinian man in Israel.
Key takeaways:
- Meta's WhatsApp AI sticker generator has been creating inappropriate and violent imagery, including images of children holding guns when prompted with words like "Palestine".
- According to an unnamed source, some of Meta's employees had flagged these issues, particularly those related to the war in Israel.
- Meta spokesperson Kevin McAlister stated that the company is addressing the issue and will continue to improve these features based on user feedback.
- This is not the first time Meta has faced issues with bias in its AI models. For instance, Instagram's auto-translate feature previously inserted the word "terrorist" into user bios written in Arabic.