
The Taylor Swift deepfake debacle was frustratingly preventable | TechCrunch

Jan 30, 2024 - techcrunch.com
The Elon Musk-owned platform, formerly known as Twitter, faced backlash after AI-generated explicit deepfake images of Taylor Swift went viral. The platform's inability to quickly identify and remove the abusive content allowed the images to be viewed over 45 million times. The White House, Swift herself (TIME's 2023 Person of the Year), and her fanbase expressed their anger, prompting the platform to temporarily ban the search term "Taylor Swift". Critics argue that this response was inadequate and highlights the platform's broader failure in content moderation.

Dr. Carolina Are, a fellow at Northumbria University’s Centre for Digital Citizens, suggests that social platforms need a complete overhaul of their content moderation systems. She recommends greater transparency with users about account decisions and faster, more personalized responses to reported issues. The article also highlights the responsibility of companies building AI products, as the deepfake images were generated using Microsoft Designer, which is powered by OpenAI’s DALL-E 3.

Key takeaways:

  • The Elon Musk-owned platform, formerly known as Twitter, faced backlash after AI-generated, pornographic deepfake images of Taylor Swift went viral, with the platform lacking the infrastructure to quickly identify and remove such abusive content.
  • Taylor Swift's fanbase attempted to flood search results to make it harder to find the images, while the platform responded by temporarily banning the search term 'taylor swift', a move criticized as ineffective.
  • Dr. Carolina Are, a fellow at Northumbria University’s Centre for Digital Citizens, argues that social platforms need a complete overhaul of how they handle content moderation and should be more transparent with users about decisions regarding their accounts or reports.
  • The deepfake images of Swift were traced back to a Telegram group using Microsoft Designer, which relies on OpenAI’s DALL-E 3 to generate images. Microsoft has since closed this loophole, but the incident underscores the need for companies to be held accountable for the safety of their products and their responsibility to disclose known risks to the public.
