In response to the growing issue, some states have updated existing revenge porn laws or expanded privacy laws to include AI-generated images. However, these laws typically do not specify how schools should discipline students involved in these incidents. The issue has also prompted discussions about federal solutions, with several pieces of legislation designed to limit deepfakes currently stalled in Congress due to disagreement over who should be held responsible.
Key takeaways:
- Deepfake images, generated by AI, are becoming disturbingly common in schools, causing significant distress and professional harm to educators like Angela Tipton, who had a lewd image of herself circulated among her students.
- Twenty states have passed laws penalizing the dissemination of nonconsensual AI-generated pornographic material, but school responses to such incidents vary widely by state, and federal legislation has stalled over disagreements about who should be held responsible.
- Some states, like Indiana, have updated their revenge porn laws to cover AI-generated images, while others have expanded privacy laws. However, these laws typically do not specify how schools should discipline students when such incidents occur.
- The Education Department has issued a new Title IX rule requiring schools to address online sex-based harassment, including nonconsensual distribution of AI-altered intimate images. The White House Task Force to Address Online Harassment and Abuse has also released a report outlining prevention, support, and accountability efforts for government agencies combating image-based sexual abuse.