Despite concerns about AI interference in elections, Meta found that coordinated networks of accounts seeking to spread propaganda or disinformation made only minor gains using generative AI. The company took down around 20 new covert influence operations worldwide to prevent foreign interference, identifying them by focusing on account behaviors rather than the content they posted. Meta noted that most of these networks lacked authentic audiences and often relied on fake likes and followers to appear more popular.
Key takeaways:
- Meta claims that generative AI had limited impact on spreading propaganda and disinformation during major elections on its platforms, including Facebook, Instagram, and Threads.
- The company's Imagine AI image generator rejected 590,000 requests to create images of key political figures, in an effort to prevent election-related deepfakes.
- Meta was able to disrupt around 20 new covert influence operations around the world, focusing on the behaviors of accounts rather than the content they posted.
- Despite these efforts, Meta pointed out that false videos about the U.S. election linked to Russia-based influence operations were often posted on other platforms.