The incident has drawn attention from the White House, which described the spread of the AI-generated photos as "alarming". White House press secretary Karine Jean-Pierre stressed the need for legislation to address the misuse of AI technology on social media. US politicians are also advocating for new laws that would criminalize the creation of deepfake images. There are currently no federal laws against the creation or sharing of deepfake images, although some states have taken steps to address the issue.
Key takeaways:
- Social media platform X, formerly Twitter, has temporarily blocked searches for Taylor Swift after explicit AI-generated images of the singer began circulating on the site.
- The fake images of Swift went viral and were viewed millions of times, causing concern among US officials and the singer's fans.
- The White House has called the spread of the AI-generated photos "alarming" and suggested that there should be legislation to tackle the misuse of AI technology on social media.
- There are currently no federal laws against the sharing or creation of deepfake images in the US, but the UK made the sharing of deepfake pornography illegal as part of its Online Safety Act in 2023.