
Top AI image generators are getting trained on thousands of illegal pictures of child sex abuse, Stanford Internet Observatory says

Dec 20, 2023 - fortune.com
A report from the Stanford Internet Observatory has revealed that thousands of images of child sexual abuse are embedded in the foundations of popular artificial intelligence (AI) image generators. The report found more than 3,200 such images in the LAION database, which has been used to train leading AI image-makers such as Stable Diffusion. In response, LAION has temporarily taken down its datasets and is working to improve its filters to detect and remove illegal content.

The report also notes that many AI projects were rushed to market without rigorous attention to the potential for misuse, and it calls on anyone using training sets derived from LAION to delete them or work with intermediaries to clean the material. It further suggests that misused AI models could be tracked and taken down using unique digital signatures, similar to the method already used to identify and remove illegal videos and images.
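The "unique digital signatures" mentioned above are, in practice, content hashes: digests computed from a file's bytes that can be checked against lists of known illegal material. The sketch below is a minimal illustration of the idea, not the report's or LAION's actual tooling; the hash list is hypothetical (its single entry is just the SHA-256 of an empty file), and real systems such as Microsoft's PhotoDNA use perceptual hashes that survive re-encoding and resizing, where an exact cryptographic hash would not.

```python
import hashlib
from pathlib import Path

# Hypothetical hash list -- real deployments draw on lists maintained
# by clearinghouses, not hard-coded values. The entry below is merely
# the SHA-256 digest of an empty file, used as a placeholder.
KNOWN_BAD_DIGESTS = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def file_sha256(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_flagged(path: Path) -> bool:
    """Return True if the file's digest matches a known-bad hash."""
    return file_sha256(path) in KNOWN_BAD_DIGESTS
```

Scanning a dataset then amounts to mapping is_flagged over its file paths and routing any match for review and removal; the limitation of exact hashing is that any re-encoding changes the digest, which is why production systems rely on perceptual hashing instead.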

Key takeaways:

  • The Stanford Internet Observatory found over 3,200 images of suspected child sexual abuse in the AI database LAION, which has been used to train leading AI image-makers.
  • LAION, the Large-scale Artificial Intelligence Open Network, responded by temporarily removing its datasets and stated it has a zero tolerance policy for illegal content.
  • Many AI projects have been rushed to market, leading to issues like this, according to David Thiel, chief technologist at the Stanford Internet Observatory.
  • The Stanford Internet Observatory is calling for drastic measures, such as deleting training sets derived from LAION or working with intermediaries to clean the material, and removing older versions of certain AI models from the internet.