The report raises concerns about the role of AI in spreading election misinformation: most U.S. adults fear that AI tools will accelerate the spread of false information during elections. Despite voluntary pledges by tech companies to keep their AI tools from being used to spread election misinformation, the report found high rates of inaccurate answers from the chatbots. The findings underscore the need for regulation and oversight of AI in politics, as well as the importance of ensuring that AI tools are trained on accurate, up-to-date information.
Key takeaways:
- Popular AI-powered chatbots are generating false and misleading information that threatens to disenfranchise voters, according to a report by AI experts and a bipartisan group of election officials.
- Chatbots like OpenAI’s GPT-4 and Google’s Gemini are prone to directing voters to polling places that don’t exist or inventing illogical responses based on rehashed, outdated information.
- Workshop participants rated more than half of the chatbots’ responses as inaccurate and categorized 40% as harmful, for example by perpetuating outdated or inaccurate information that could limit voting rights.
- Major technology companies have signed a largely symbolic pact, voluntarily adopting “reasonable precautions” to prevent their artificial intelligence tools from being used to generate increasingly realistic images, audio and video, including material that provides “false information to voters about when, where, and how they can lawfully vote.”