The findings highlight broader challenges in AI development, including the tension between building general-purpose models and culturally specific ones, and the difficulty of instilling robust cultural reasoning. Experts Chris Russell and Vagrant Gautam suggest the models' behavior reflects the uneven availability of critical content across languages: criticism of the Chinese government is far more common online in English than in Chinese. Geoffrey Rockwell and Maarten Sap emphasize that AI models need a better grasp of socio-cultural norms, and point to ongoing debates over model sovereignty and influence in the AI community.
Key takeaways:
- AI models developed by Chinese labs such as DeepSeek censor politically sensitive topics, in line with a 2023 Chinese regulation forbidding content that "damages the unity of the country and social harmony."
- The language of a prompt affects model responses: models including Claude 3.7 Sonnet and Qwen 2.5 72B Instruct are less likely to answer politically sensitive questions when asked in Chinese than in English.
- Experts attribute this uneven cross-lingual compliance to generalization failure: Chinese-language training data is more likely to have been politically censored, which shapes how models respond in Chinese.
- The findings highlight ongoing debates in the AI community about model sovereignty, cultural competence, and cross-lingual alignment.