In response to the deepfake images, Taylor Swift is reportedly considering legal action against the adult site that hosted the AI-generated explicit photos without her consent. Social media platforms, including Facebook, Instagram, and Reddit, are actively removing the posts, with Meta, the parent company of Facebook and Instagram, stating that the content violates its policies. The incident highlights the global challenge of curbing the unauthorized creation and spread of explicit AI-generated content.
Key takeaways:
- Microsoft CEO Satya Nadella has expressed concern over the spread of nonconsensual AI-generated explicit images, specifically those featuring Taylor Swift, calling it 'alarming and terrible.'
- Nadella emphasized the need for global societal norms, as well as collaboration among law enforcement, tech platforms, and lawmakers, to address the spread of fake explicit content.
- The controversy is linked to a report from 404 Media suggesting that Microsoft's Designer image generator was used to create the explicit images, even though the tool is supposed to refuse requests to depict famous individuals.
- Taylor Swift is reportedly weighing legal action against the adult site that hosted the AI-generated explicit photos without her consent, while social media platforms are actively removing the posts.