1 in 6 Congresswomen Targeted by AI-Generated Sexually Explicit Deepfakes

Dec 14, 2024 - gizmodo.com
A new study by the American Sunlight Project (ASP) reveals that more than two dozen members of Congress, predominantly women, have been targeted by sexually explicit deepfakes, highlighting a stark gender disparity in how this technology is abused. The study found more than 35,000 mentions of nonconsensual intimate imagery (NCII) depicting 26 members of Congress on deepfake websites. While most of the imagery was removed after researchers alerted the affected members, the study emphasizes that individuals who lack the resources of a member of Congress face far greater difficulty achieving similar outcomes. It also notes that women in Congress are 70 times more likely than men to be targeted, and that the issue threatens democracy and free speech, as many women self-censor to avoid online harassment.

The ASP is advocating for federal legislation to address this issue, with proposed bills like the DEFIANCE Act and the Take It Down Act aiming to establish penalties for creating and sharing such imagery. Despite bipartisan support in the Senate, these bills face challenges in the House due to concerns about free speech and harm definitions. The study underscores the urgent need for action, as AI-generated NCII not only affects public figures but also poses broader risks to national security and discourages women from participating in politics. In the absence of legislative action, the White House is working with the private sector on solutions, though there is skepticism about Big Tech's ability to self-regulate effectively.

Key takeaways:

  • More than two dozen members of Congress, primarily women, have been targeted by sexually explicit deepfakes, highlighting a significant gender disparity in the use of this technology.
  • The American Sunlight Project (ASP) found over 35,000 mentions of nonconsensual intimate imagery involving 26 members of Congress on deepfake websites, with women being 70 times more likely to be targeted than men.
  • There is currently no federal law imposing criminal or civil penalties for creating and distributing AI-generated nonconsensual intimate imagery, though some states have enacted civil penalties.
  • Bills such as the DEFIANCE Act and the Take It Down Act, which aim to address these issues, have passed the Senate but face challenges in the House over free speech concerns and how harm is defined.