Despite Microsoft's plans to combat disinformation ahead of the 2024 elections, the researchers claim that the issues with Copilot are systemic and not limited to specific elections or regions. The chatbot was found to be most accurate in English, but even then, only 52% of answers were free of evasion or factual error. The researchers warn that the spread of misinformation by AI chatbots could pose a significant threat to democratic processes.
Key takeaways:
- Microsoft's AI chatbot, Copilot, has been found to respond to political queries with misinformation, outdated information, and conspiracy theories, according to research by AI Forensics and AlgorithmWatch.
- The research found that a third of the answers given by Copilot contained factual errors, and the tool was deemed an unreliable source of information for voters.
- Microsoft has acknowledged the problem and stated that it is taking steps to address it and to prepare its tools for the 2024 elections.
- Experts warn that the rapid development of generative AI poses a threat to high-profile elections, as such tools could be used to spread disinformation on an unprecedented scale.