OpenAI Bans Chinese Accounts Using ChatGPT To Edit Code For Social Media Surveillance - Slashdot

Feb 22, 2025 - tech.slashdot.org
OpenAI has banned a group of Chinese accounts that were using ChatGPT to develop an AI-powered social media surveillance tool. The operation, known as Peer Review, involved the group prompting ChatGPT to create sales pitches for a program aimed at monitoring anti-Chinese sentiment on platforms such as X, Facebook, YouTube, and Instagram. The operation focused on identifying calls for protests against human rights violations in China, with the intent of sharing those insights with Chinese authorities. The accounts operated during Chinese business hours, prompted ChatGPT in Chinese, and used the tools in a manner consistent with manual prompting rather than automation.

According to OpenAI principal investigator Ben Nimmo, this is the first time the company has uncovered an AI-powered surveillance tool of this kind. The surveillance tool's code was reportedly based on one of Meta's open-source Llama models. The group also used ChatGPT to generate an end-of-year performance review in which it claimed to have written phishing emails for clients in China. The discovery highlights how threat actors can inadvertently reveal their activities through the way they use AI models.

Key takeaways:

  • OpenAI banned a group of Chinese accounts using ChatGPT to develop an AI-powered social media surveillance tool.
  • The tool was designed to monitor anti-Chinese sentiment on platforms like X, Facebook, YouTube, and Instagram, with a focus on spotting calls for protests against human rights violations in China.
  • The accounts operated during Chinese business hours and prompted the models in Chinese, in a manner consistent with manual prompting rather than automation.
  • The surveillance tool's code was based on one of Meta's open-source Llama models, and the group also used ChatGPT to draft a performance review claiming it had written phishing emails for clients in China.