Experts are concerned that these chatbots could normalize and mainstream disinformation narratives, potentially radicalizing individuals who already embrace these conspiracies. Adam Hadley, executive director of Tech Against Terrorism, warns that these rudimentary chatbots can readily be weaponized, with potential uses ranging from radicalization to the spread of propaganda and misinformation. He emphasizes the need for robust content moderation in generative AI, bolstered by comprehensive legislation.
Key takeaways:
- The far-right social network Gab has launched almost 100 chatbots, including AI versions of Adolf Hitler and Donald Trump, some of which question the reality of the Holocaust.
- Gab launched a new platform, Gab AI, specifically for its chatbots and has quickly expanded the number of “characters” available, with users currently able to choose from 91 different figures.
- These chatbots are seen as dangerous because they could normalize and mainstream disinformation narratives, acting as echo chambers that further radicalize individuals who already embrace these conspiracies.
- Experts stress the need for robust content moderation in generative AI, bolstered by comprehensive legislation, to prevent the spread of propaganda and misinformation.