Despite these potential threats, the federal government has yet to take action, making state-level regulation crucial. Enforcement of these laws, however, depends on a broad range of parties, including major social media companies. Some of those companies, such as Meta and Microsoft, have begun requiring political ads on their platforms to disclose whether they were made with the help of AI. The article concludes by warning that deepfakes could disrupt elections, both by swaying close races and by enabling candidates to dismiss authentic content as fake.
Key takeaways:
- Only three states enacted laws in 2023 addressing the challenges posed by artificial intelligence and deepfakes in political campaigns, despite the growing threat these technologies pose.
- The three states, Minnesota, Michigan, and Washington, enacted legislation that falls into two categories: disclosure requirements and bans.
- Experts warn that 2024 could bring the first 'deepfake election,' as deepfake videos become increasingly prevalent online and can be used to spread political disinformation.
- Despite the growing threat, the federal government has not yet taken significant action to regulate the use of AI deepfakes in political campaigns, making state-level action particularly important.