Dr. Carolina Are, a fellow at Northumbria University’s Centre for Digital Citizens, suggests that social platforms need a complete overhaul of their content moderation systems, with greater transparency toward users about account decisions and faster, more personalized responses to reported issues. The article also points to the responsibility of companies building AI products: the deepfake images were generated using Microsoft Designer, which relies on OpenAI’s DALL-E 3.
Key takeaways:
- The Elon Musk-owned platform, formerly known as Twitter, faced backlash after AI-generated pornographic deepfake images of Taylor Swift went viral, exposing the platform's lack of infrastructure to quickly identify and remove such abusive content.
- Taylor Swift's fanbase attempted to flood search results to bury the images, while the platform responded by temporarily blocking searches for 'taylor swift', a move criticized as ineffective.
- Dr. Carolina Are, a fellow at Northumbria University’s Centre for Digital Citizens, argues that social platforms need a complete overhaul of how they handle content moderation and should be more transparent with users about decisions regarding their accounts or reports.
- The deepfake images of Swift were traced back to a Telegram group that used Microsoft Designer, which draws on OpenAI’s DALL-E 3 to generate images. Microsoft has since closed the loophole, but the incident underscores the need for companies to be held accountable for the safety of their products and to disclose known risks to the public.