The study calls for the discontinuation of any models built on Stable Diffusion 1.5 that lack proper safeguards. While Stability AI has taken steps to address the issue, the Stanford study found that the tool remains trained in part on illegal content and can be misused to produce AI-generated child sexual abuse material (CSAM). The researchers also raised concerns about the legality of models trained on such material, and the report highlights the need for better regulation and transparency in the AI sector to prevent this kind of misuse.
Key takeaways:
- Stable Diffusion, a text-to-image generative AI tool from Stability AI, was trained on a large public dataset containing hundreds of known images of child sexual abuse material, according to research from the Stanford Internet Observatory.
- The researchers identified more than 3,000 suspected instances of child sexual abuse material in the public training data and believe the actual volume is likely far higher.
- Stability AI has taken steps to address the issue by releasing newer versions of Stable Diffusion that filter more explicit material out of the training data and outputs, but the Stanford study found that the tool is still trained in part on illegal content.
- The Canadian Centre for Child Protection, which validated Stanford’s findings, expressed concern about the lack of care in curating these large datasets, saying it exacerbates the child sexual abuse material problems that already affect every major tech company.