In the US, the FBI has warned about deepfakes being used for extortion, and attorneys general from every state have urged Congress to take action against the rise of AI-generated child sexual abuse material. Experts have noted an increase in the circulation of such material, which is hampering efforts to identify victims. It remains unclear whether upcoming regulations will help curb the spread of disturbing deepfake images.
Key takeaways:
- AI-generated deepfake nudes of a 14-year-old girl and other female classmates were circulated online across four schools in Spain, causing distress to the victims and their parents.
- The local Juvenile Prosecutor’s Office is handling the investigation, which has so far identified 20 victims and a group of suspects behind the deepfakes.
- The deepfake app used is free and accessible to anyone with a smartphone, a dystopian use of AI that is likely to keep causing problems in schools and elsewhere.
- Attorneys general from every US state have sent a letter to Congress urging action against the increase of AI-generated child sexual abuse material (CSAM), as the FBI continues to receive reports from victims whose photos or videos were altered into explicit content.