The Internet Watch Foundation's (IWF) findings are based on a month-long investigation into a child abuse forum on the dark web. The watchdog found that more than one in five of the illegal images were classified as category A, the most serious kind of content, depicting rape and sexual torture. The UK government has stated that AI-generated child sexual abuse material will be covered by the upcoming online safety bill, which will require social media companies to prevent such content from appearing on their platforms.
Key takeaways:
- The IWF has warned that artificial intelligence-generated child sexual abuse images are threatening to overwhelm the internet, with nearly 3,000 such images that break UK law found in its investigation.
- The AI technology is being used to generate new depictions of real-life abuse victims and to portray celebrities who have been "de-aged" as children in sexual abuse scenarios.
- The IWF fears that the proliferation of AI-generated child sexual abuse material (CSAM) will distract law enforcement agencies from detecting real abuse and helping victims.
- The UK government has stated that AI-generated CSAM will be covered by the online safety bill, which is due to become law soon, and will require social media companies to prevent such content from appearing on their platforms.