Albert Zhang, a cybersecurity analyst, suggested that Gemini's bias could stem from the data used to train the AI, which likely contained Chinese-language text produced by the Chinese government's propaganda system. Google says Gemini is designed to offer neutral responses and that it is continually working to improve them, but the company declined to comment on the Chinese-language data used to train the model. Lawmakers have expressed concern over the potential misuse of AI for disinformation and have urged tech companies to improve AI training and to thoroughly test these models before releasing them.
Key takeaways:
- Google's AI assistant, Gemini, has been found to parrot Beijing's official positions when asked about problems in the United States and Taiwan, and remains silent on sensitive topics related to China's human rights abuses and COVID policies.
- Experts suggest that Gemini's pro-Beijing responses could result from the data used to train the assistant, which likely contains Chinese-language text produced by the Chinese government's propaganda system.
- US lawmakers have expressed concerns over these findings, urging Google and other Western companies to improve AI training and be more transparent about their AI training data.
- Google, which quit the Chinese market in 2010 over censorship demands, has faced criticism in the past for its handling of China-related content and data, including a cancelled project to build a search engine tailored for the Chinese market.