The use of deepfakes in political campaigns has been on the rise, with instances such as the Republican National Committee releasing an entirely AI-generated ad and Florida GOP Gov. Ron DeSantis's campaign using AI-generated images in an attack ad. The advocacy group Public Citizen argues that deepfakes are a form of fraudulent misrepresentation and should be regulated. However, even if the Federal Election Commission (FEC) decides to ban AI deepfakes in campaign ads, such a ban wouldn't cover all the threats they pose to elections, such as misleading content created and disseminated by individual social media users.
Key takeaways:
- The Federal Election Commission (FEC) is considering regulating AI-generated deepfakes in political ads ahead of the 2024 election to protect voters from election disinformation.
- Several campaigns, including that of Florida GOP Gov. Ron DeSantis, are already using AI-generated deepfakes to persuade voters.
- The FEC will not decide whether to regulate these ads until after a 60-day public comment period, and there is debate over whether the agency even has the authority to regulate deepfake ads.
- Even if the FEC decides to regulate AI deepfakes in campaign ads, such rules would not cover all potential threats posed by the technology, such as misleading content created and disseminated by individual social media users or outside groups like PACs.