Chinese AI models are subject to strict information controls under a 2023 law that prohibits generating content deemed harmful to national unity or social harmony. To comply, companies like DeepSeek implement censorship through prompt-level filters or fine-tuning. The original R1 model already refused to answer 85% of politically controversial questions, and R1-0528 continues this trend. Although it sometimes acknowledges human rights abuses, the model frequently echoes the Chinese government's official stance. This has drawn criticism, including concern about the implications of Western companies building on top of Chinese AI models subject to such censorship.
Key takeaways:
- DeepSeek's updated AI model, R1-0528, achieves high benchmark scores but is more heavily censored on contentious topics, especially those the Chinese government considers sensitive.
- The model is less willing to answer questions on subjects the Chinese government deems controversial, such as the internment camps in Xinjiang.
- Chinese AI models are required to follow strict information controls due to a 2023 law that forbids generating content that could harm national unity and social harmony.
- There is concern about the implications of Western companies building on top of Chinese AI models that are subject to censorship.