Governments and businesses are taking steps to address the issue of explicit AI-generated images. The UK government plans to criminalize the creation and sharing of sexually explicit deepfakes, while the US is considering the bipartisan Take It Down Act, which would criminalize non-consensual, sexually exploitative images, including AI-generated deepfakes. Major tech companies have pledged to prevent their AI products from generating non-consensual deepfake pornography and child sexual abuse material. Despite these efforts, demand for such content persists, as security researcher Jeremiah Fowler's discovery shows.
Key takeaways:
- Jeremiah Fowler discovered an unprotected AWS S3 bucket of explicit AI-generated images, including images of children and of celebrities depicted as children, linked to South Korean AI company AI-NOMIS and its image-generation app GenNomis.
- The exposed data comprised 93,485 images along with JSON files containing user prompts, highlighting both the potential for abuse and the lack of enforcement of the platform's guidelines prohibiting illegal content.
- After Fowler reported the exposure, the images were secured, and the websites of GenNomis and AI-NOMIS went offline without any communication from the developers.
- Governments and tech companies are responding, with laws proposed and pledges made to prevent the creation and distribution of non-consensual deepfake pornography.