Several AI companies, including Google, OpenAI, and Microsoft, have signed an accord promising to create new ways to mitigate the deceptive use of AI in elections. The accord includes seven principal goals, such as researching and deploying prevention methods, providing provenance for content, improving AI-detection capabilities, and evaluating the effects of misleading AI-generated content. As the 2024 presidential race intensifies, these companies will be tested on the safeguards they have implemented and the commitments they have made.
Key takeaways:
- AI companies like Google, OpenAI, and Microsoft are implementing measures to handle misinformation and misuse of their AI tools during the upcoming US presidential election.
- Google's Gemini will refuse to answer election-related questions, instead referring users to Google Search. OpenAI's ChatGPT will refer users to CanIVote.org for voting information, and OpenAI has updated its rules to forbid impersonation and misrepresentation of the voting process.
- Microsoft is working to improve the accuracy of its chatbot's responses after a report found that it gave false information about elections. The company also plans to release regular reports on foreign influence operations targeting key elections.
- Several AI companies signed an accord promising to create new ways to mitigate the deceptive use of AI in elections, agreeing on seven principal goals, including researching and deploying prevention methods, providing provenance for content, improving AI-detection capabilities, and evaluating the effects of misleading AI-generated content.