The report also highlights that many AI projects were rushed to market without rigorous attention to the potential for misuse. It calls on anyone who has built training sets from LAION to delete them or work with intermediaries to clean the material, and it suggests that misused AI models could be tracked and taken down using unique digital signatures, similar to the hash-based methods currently used to flag illegal videos and images.
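For context on what a "unique digital signature" means in practice: tracking systems for known abusive images and videos match files against databases of published hashes. The sketch below applies the same idea to a model file. It is not from the report, and the filename and blocklist are purely hypothetical; it simply illustrates hash-based fingerprinting with Python's standard library.

```python
import hashlib
import tempfile
from pathlib import Path

def fingerprint(path: Path, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, streamed in
    chunks so large model-weight files fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Demo with a stand-in "weights" file; a real check would point
# at actual model weights and a published registry of digests.
with tempfile.TemporaryDirectory() as tmp:
    weights = Path(tmp) / "model-weights.bin"
    weights.write_bytes(b"\x00" * 1024)  # placeholder bytes, not real weights

    model_hash = fingerprint(weights)

    # Hypothetical blocklist of digests for flagged model versions.
    flagged = {model_hash}  # pretend this exact release was flagged
    if model_hash in flagged:
        print(f"digest {model_hash[:16]}... matches a flagged release")
```

The same comparison works for any file whose bytes are fixed, which is why the report can draw the analogy to videos and images: once a tainted model version's digest is on record, hosts can detect and remove redistributed copies.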
Key takeaways:
- The Stanford Internet Observatory found over 3,200 images of suspected child sexual abuse in the AI database LAION, which has been used to train leading AI image-makers.
- LAION, the Large-scale Artificial Intelligence Open Network, responded by temporarily removing its datasets and stated it has a zero-tolerance policy for illegal content.
- Many AI projects were rushed to market, which has led to problems like this one, according to David Thiel, chief technologist at the Stanford Internet Observatory.
- The Stanford Internet Observatory is calling for drastic measures, such as deleting training sets derived from LAION or working with intermediaries to clean the material, and removing older, tainted versions of certain AI models from the internet.