The use of AI in censorship marks a shift from traditional methods, which relied on human labor and simple keyword-matching algorithms to block blacklisted terms. The new approach enables far more sophisticated detection of dissent at scale, flagging even subtle or indirect criticism. It fits a broader trend of authoritarian regimes adopting AI for repressive ends, as evidenced by OpenAI's findings that Chinese entities have used LLMs to track anti-government posts and smear dissidents. The Chinese government denies these claims, emphasizing its commitment to ethical AI development.
Key takeaways:
- China has developed an AI system to enhance its censorship capabilities, using a large language model trained on 133,000 examples of sensitive content.
- The AI system is designed to automatically flag content related to sensitive topics such as politics, social issues, and military matters, extending beyond traditional censorship methods.
- The training dataset was discovered in an unsecured database and references "public opinion work," indicating alignment with Chinese government information-control goals.
- Authoritarian regimes, China among them, are increasingly leveraging AI to make state-led information control and censorship more efficient and sophisticated.