This is the first time a major AI company has disclosed how its specific tools were used for online deception. Amid widespread speculation about the role of generative AI in such campaigns, the company aimed to show the realities of how the technology is changing them. However, the campaigns failed to gain much traction, and the AI tools did not appear to have expanded their reach or impact.
Key takeaways:
- OpenAI identified and disrupted five online campaigns that used its generative artificial intelligence technologies to manipulate public opinion and influence geopolitics.
- The campaigns were run by state actors and private companies in Russia, China, Iran and Israel, and used OpenAI’s technology to generate social media posts, translate and edit articles, write headlines and debug computer programs.
- This is the first time a major AI company has revealed how its specific tools were used for online deception, according to social media researchers.
- Despite the use of AI tools, the campaigns failed to gain much traction and did not appear to have expanded their reach or impact.