Deepfake porn and online gender-based violence are a growing concern, particularly in India, where deepfaked videos of actresses have surged. While some countries have laws against deepfakes, the production and distribution of AI-generated porn remain largely unregulated. In response to the Oversight Board's cases, Meta said it uses a mix of AI and human review to detect sexually suggestive content. The board has invited public comments on the matter and will publish its decision in the coming weeks.
Key takeaways:
- The Oversight Board, Meta’s policy council, is investigating how Instagram in India and Facebook in the U.S. handled AI-generated explicit images of public figures.
- In both cases, the platforms have since removed the media, but only after multiple reports and appeals, highlighting potential gaps in Meta's moderation processes.
- The board is particularly interested in whether Meta's policies and enforcement practices are effective at addressing the problem of explicit AI-generated content, and whether they are applied fairly across different markets and languages.
- The Oversight Board has sought public comments on the matter, with a focus on the harms caused by deepfake porn, the proliferation of such content in regions like the U.S. and India, and potential pitfalls in Meta's approach to detecting AI-generated explicit imagery.