The issue arises as other tech companies, such as OpenAI and Meta, develop their own AI voice assistants, and it heightens concerns about the spread of misinformation ahead of the upcoming presidential election. Microsoft has said it is working to improve the quality of Copilot's responses, but critics argue that more needs to be done to prevent the spread of false information.
Key takeaways:
- Microsoft's AI-assisted search tool, Copilot, has been found to generate false information, including fabricated statements attributed to Russian President Vladimir Putin regarding the death of Alexei Navalny.
- The AI tool, which is embedded across Microsoft products, typically links to news stories, giving users the impression that the information it's sharing is credible.
- Microsoft has warned users that its tool may give "incorrect" information and that they should verify facts themselves, but the tool displays no such caveat when it is actually in use.
- This issue raises concerns about the potential for AI tools to contribute to a misinformation feedback loop, particularly in the context of the upcoming presidential election.