The report also highlighted the potential for AI to be used in fraud, scams, and manipulation, especially in disrupting elections. In the past three months, OpenAI banned accounts linked to five covert influence operations, including Russia's Doppelganger and China's Spamouflage. Both operations used OpenAI tools to generate multilingual comments posted across social media sites. OpenAI also disrupted a campaign traced back to a political marketing firm in Tel Aviv, which used AI to generate and edit articles and comments posted across various platforms. Although these campaigns gained little traction, OpenAI emphasized the need to remain vigilant against such influence operations.
Key takeaways:
- OpenAI has taken down influence operations tied to Russia, China, and Iran that used AI tools such as ChatGPT to manipulate public opinion.
- These operations used AI to generate social media comments, create fake accounts, and produce images, raising concerns about the potential for AI fakes to disrupt elections.
- Despite the use of AI, these operations struggled to gain significant traction or reach large audiences; much of the engagement they did receive came from users identifying the content as fake.
- OpenAI warns that while AI offers some benefits to threat actors, it doesn't solve their main challenge of distribution, getting content in front of real audiences, and companies must remain vigilant against influence operations.