The spread of AI-generated photos has raised concerns at the White House, which has called for legislative measures to address the misuse of AI technology on social media platforms. In response, some US politicians are pushing for new laws that would criminalize the creation of deepfake images. There are currently no federal laws governing the creation or sharing of deepfakes, though efforts to combat the problem are underway at the state level.
Key takeaways:
- X has restricted searches for Taylor Swift in response to a surge in the circulation of graphic AI-generated content featuring the artist.
- Teams at X are working to remove all identified images and take necessary actions against the accounts responsible for posting them.
- The White House has expressed concern over the spread of AI-generated photos, emphasizing the disproportionate impact on women and girls, and calling for legislative measures to address the misuse of AI technology on social media platforms.
- Despite the escalating issue, no federal laws currently govern the sharing or creation of deepfake images; efforts are underway at the state level, and some US politicians are pushing for new laws that would criminalize the creation of deepfakes.