The findings raise questions about the utility of AI models in providing accurate election information and about how companies are complying with their own pledges to promote information integrity and mitigate misinformation. The study also highlights the harms that could occur when voters use these and similar new technologies to seek election information. The AI Democracy Projects hopes the study will help begin mapping the landscape of those potential harms.
Key takeaways:
- The AI Democracy Projects tested AI models from leading companies, including OpenAI, Google, and Meta, and found that they provide inaccurate and misleading information about voting processes and regulations.
- None of the AI models correctly stated that campaign attire, such as a MAGA hat, would not be allowed at the polls in Texas, raising concerns about the utility of these models for the public.
- Overall, the AI models performed poorly on accuracy: about half of their collective responses were rated inaccurate by a majority of testers, and more than one-third were rated incomplete and/or harmful.
- The findings raise questions about how the companies are complying with their own pledges to promote information integrity and mitigate misinformation during this presidential election year.