According to Ben Nimmo, a principal investigator at OpenAI, this is the first time the company has uncovered an AI-powered surveillance tool of this kind. The tool's code was reportedly based on an open-source version of one of Meta's Llama models. The group also used ChatGPT to draft an end-of-year performance review in which it claimed to have written phishing emails for clients in China. The discovery highlights how threat actors can inadvertently expose their activities through their use of AI models.
Key takeaways:
- OpenAI banned a group of Chinese accounts using ChatGPT to develop an AI-powered social media surveillance tool.
- The tool was designed to monitor anti-Chinese sentiment on platforms like X, Facebook, YouTube, and Instagram, with a focus on spotting calls for protests against human rights violations in China.
- The ChatGPT accounts involved were active during mainland Chinese business hours and prompted the models in Chinese, patterns that suggest manual prompting rather than automation.
- The surveillance tool's code was based on an open-source version of one of Meta's Llama models, and ChatGPT was also used to generate phishing emails for clients in China.