Critics argue that these platforms apply a double standard: they have historically enforced strict rules against real human sex workers while permitting AI-generated sexual content. Both Meta and TikTok stepped up removal of such ads after being contacted by NBC News, but questions remain about how the ads bypassed their filters in the first place. The situation highlights the challenges platforms face in moderating content in the era of AI, and the potential for inconsistent enforcement of their policies.
Key takeaways:
- Facebook, Instagram, and TikTok have been struggling to control the rise of sexually explicit advertisements for AI-powered chatbots, despite their efforts to limit sexualized content.
- These ads often feature sexualized female characters and use popular children's TV characters or anime-style images to promote not-safe-for-work experiences.
- Researchers argue that the platforms enforce a double standard: real human sex workers are barred from monetizing their own image, while AI-generated sexualized content is permitted.
- Although the platforms have removed some of these ads, questions remain about how the ads bypassed their filters in the first place, and enforcement of these policies appears inconsistent.