Microsoft’s AI Chatbot Replies to Election Questions With Conspiracies, Fake Scandals, and Lies

Dec 15, 2023 - wired.com
Microsoft's AI chatbot, Copilot, spreads misinformation about elections, according to research by AI Forensics and AlgorithmWatch. The chatbot, which is based on OpenAI's GPT-4, provided incorrect information about polling locations, candidates, and election dates, and promoted debunked election conspiracies. The researchers found that a third of Copilot's answers contained factual errors, and in some cases the chatbot fabricated information entirely.

Despite Microsoft's plans to combat disinformation ahead of the 2024 elections, the researchers argue that the problems with Copilot are systemic rather than limited to specific elections or regions. The chatbot was most accurate in English, but even then only 52% of answers were free of evasion or factual error. The researchers warn that the spread of misinformation by AI chatbots could pose a significant threat to democratic processes.

Key takeaways:

  • Microsoft's AI chatbot, Copilot, has been found to respond to political queries with misinformation, outdated information, and conspiracy theories, according to research by AI Forensics and AlgorithmWatch.
  • The research found that a third of the answers given by Copilot contained factual errors, and the tool was deemed an unreliable source of information for voters.
  • Microsoft has acknowledged the problem and stated that it is taking steps to address it and to prepare its tools for the 2024 elections.
  • Experts warn that the rapid development of generative AI poses a threat to high-profile elections, as these tools could be used to spread disinformation on an unprecedented scale.