
Seeking Reliable Election Information? Don’t Trust AI

Feb 28, 2024 - proofnews.org
A study conducted by the AI Democracy Projects found that leading AI text models, including OpenAI’s GPT-4, Google’s Gemini, and Meta’s Llama 2, provided inaccurate information when asked about election-related queries. The models were tested on 26 questions that voters might ask, and the responses were rated for bias, accuracy, completeness, and harmfulness. The AI models performed poorly on accuracy, with about half of their collective responses being ranked as inaccurate by a majority of testers.

The findings raise questions about the utility of AI models in providing accurate election information and how companies are complying with their own pledges to promote information integrity and mitigate misinformation. The study also highlighted the potential harms that could occur when voters use these and similar new technologies to seek election information. The AI Democracy Projects hopes the study will help begin mapping the landscape of these potential harms.

Key takeaways:

  • AI models from leading companies including OpenAI, Google, and Meta were tested by the AI Democracy Projects and found to provide inaccurate and misleading information about voting processes and regulations.
  • None of the AI models correctly stated that campaign attire, such as a MAGA hat, would not be allowed at the polls in Texas, raising concerns about the utility of these models for the public.
  • Overall, the models scored poorly on accuracy: a majority of testers rated roughly half of the collective responses inaccurate, and more than one-third were judged incomplete and/or harmful.
  • The findings raise questions about how the companies are complying with their own pledges to promote information integrity and mitigate misinformation during this presidential election year.
