The Taylor Swift deepfakes appeared on January 24 and were quickly flagged by users, leading to the suspension or restriction of several accounts. The images may have originated from a group on Telegram. The term "deepfake" was coined in 2017 by a Reddit user who used open-source deep-learning software to swap celebrities' faces into pornographic videos. In response to the backlash, tech companies like Google and Meta now require political ads to disclose the use of AI-generated content. Several U.S. states and countries like China and South Korea have also introduced legislation to regulate deepfakes.
Key takeaways:
- Last week, explicit AI-generated images of Taylor Swift flooded X (formerly Twitter), with one image garnering over 45 million views and 24,000 reposts before its removal.
- Deepfakes use deep learning, typically generative models such as autoencoders, GANs, or diffusion models, to produce synthetic images, video, or audio that can be used to spread misinformation or influence behavior (a minimal sketch of the classic face-swap approach follows this list).
- While some deepfakes are used for beneficial purposes, such as creating AI tutors for education platforms, many are used for nonconsensual pornography or to spread false information.
- In response to the rise of deepfakes, tech companies like Google and Meta have mandated written disclosures for political ads using AI-generated content, and several states and countries have introduced legislation to regulate or criminalize the creation and use of deepfakes.
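To make the face-swap idea concrete, here is a minimal sketch of the autoencoder architecture popularized by the early deepfake tools: a shared encoder paired with one decoder per identity. This assumes PyTorch and 64x64 RGB face crops; all class and function names here are hypothetical, and this is an illustration of the general technique, not any specific tool's implementation.

```python
# Minimal sketch of the classic deepfake face-swap setup:
# one shared encoder, one decoder per identity. Assumes PyTorch
# and 64x64 RGB face crops; illustrative only.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a face image into a shared latent representation."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1),    # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face for ONE identity from the shared latent."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),    # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 128, 16, 16)
        return self.net(h)

encoder = Encoder()
decoder_a = Decoder()  # trained only on person A's faces
decoder_b = Decoder()  # trained only on person B's faces
loss_fn = nn.L1Loss()
opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-4,
)

def train_step(faces_a, faces_b):
    """One joint reconstruction step: the shared encoder learns
    identity-agnostic facial structure (pose, expression, lighting),
    while each decoder learns to render one specific identity."""
    opt.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) \
         + loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()
    return loss.item()

# The "swap": encode a face of person A, then decode it with person B's
# decoder, producing B's identity with A's pose and expression.
fake = decoder_b(encoder(torch.rand(1, 3, 64, 64)))
```

The swap works because the encoder is shared: it is forced to represent what the two identities have in common (head pose, expression), so routing that representation through the other identity's decoder re-renders the same pose and expression with a different face. Modern tools add adversarial losses, masking, and blending on top of this core idea.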