The article also points out that popular AI text-to-image generators lack sufficient content moderation policies, which could allow false and misleading information to be created and spread at scale. It argues that both short-term and long-term solutions are needed, including stronger content moderation policies, greater media literacy, and the use of AI to detect AI-generated content. The article concludes by emphasizing the need to prepare for a new era of electoral misinformation and disinformation.
Key takeaways:
- Artificial Intelligence (AI) is already being used in politics and elections, posing a high risk to the integrity of the 2024 election process.
- Content moderation policies of popular AI text-to-image generators are currently insufficient, with over 85% of prompts related to known misleading or false narratives being accepted.
- Although the quality of generated images varies, the ease with which these AI tools can create and spread false and misleading information is a significant concern.
- There is an urgent need for stronger content moderation policies, proactive action from social media companies, and increased media literacy among online users to combat the use of AI in coordinated disinformation campaigns.