Google's move comes amid growing concern over the use of AI in political campaigns, with the Federal Election Commission weighing rules on AI-generated deepfakes in political ads. Despite these measures, experts warn of loopholes and of AI's potential to produce convincing false content that could mislead voters. Major social networks are struggling to keep pace with the surge of false election-related content, and there are fears that the rise of generative AI could put the integrity of the 2024 elections at risk.
Key takeaways:
- Google has announced new restrictions on AI in political advertising, requiring advertisers to clearly disclose any digitally altered images or audio. The rules take effect in November, ahead of the 2024 US presidential election.
- The Federal Election Commission is also weighing rules on AI-generated deepfakes in political advertisements. However, Google's policy has loopholes: for example, it exempts synthetic material that is inconsequential to the claims made in the ad.
- Experts warn that AI-generated content in political campaigns can produce convincing false material that misleads voters, and that social media platforms are especially vulnerable to its spread.
- Several initiatives are underway to address the issue, including provenance technologies that give greater insight into the origin and history of digital content. Companies such as Google, Meta, and Microsoft are rolling out policies and devising new strategies to counter the challenges posed by AI-generated content.