Despite an agreement last month by 20 tech companies, including OpenAI, Microsoft, and Stability AI, to prevent deceptive AI content from interfering with elections worldwide, the CCDH found that the AI tools generated misleading images in 41% of its tests. The tools were most susceptible to prompts asking for photos depicting election fraud. Midjourney performed the worst, producing misleading images in 65% of tests, and the report also found evidence of people using it to create misleading political content. In response, Midjourney's founder said updates related to the upcoming U.S. election are coming soon, while Stability AI updated its policies to prohibit the creation or promotion of disinformation. OpenAI said it is working to prevent abuse of its tools; Microsoft did not respond to a request for comment.
Key takeaways:
- AI image creation tools from companies like OpenAI and Microsoft can be used to create misleading images that could promote election-related disinformation, according to a report by the Center for Countering Digital Hate (CCDH).
- The CCDH used these AI tools to create images such as one showing U.S. President Joe Biden lying in a hospital bed and another showing election workers destroying voting machines, raising concerns about the spread of false claims.
- Despite OpenAI, Microsoft, and Stability AI being among the 20 tech companies that pledged to prevent deceptive AI content from interfering with elections, the CCDH found that their tools generated misleading images in 41% of its tests.
- Midjourney, an AI tool whose maker was not among the initial signatories of the agreement, performed the worst, generating misleading images in 65% of the tests. The CCDH also found evidence that some people are already using Midjourney to create misleading political content.