Despite the proliferation of nonconsensual, sexually exploitative AI deepfakes online, victims have few legal protections. Only a handful of U.S. states have laws prohibiting the dissemination of such material, and while major social platforms ban this content, it can still slip through. Meanwhile, European lawmakers are working toward ratifying an AI Code of Conduct to address these issues.
Key takeaways:
- The attorneys general of all 50 U.S. states and four territories have signed a letter urging Congress to take action against AI-enabled child sexual abuse material (CSAM).
- The letter expresses concern that AI is creating a new frontier for abuse that makes prosecution more difficult, particularly through the creation of deepfake images.
- The signatories are pushing for Congress to establish a committee to research solutions to the risks of AI-generated CSAM, and to expand existing CSAM laws to explicitly cover AI-generated material.