The experiment found that discussions of partisan issues did not necessarily devolve into clashes when the two sides were roughly equal in number, which suggests that polarization can be reduced when people interact with those who hold different political beliefs. The study also indicated that AI models with human-like profiles could point the way toward more civil discourse on social media.
Key takeaways:
- Petter Törnberg and his team created an AI social experiment where 500 chatbots with distinct personas interacted on a pseudo-Twitter platform.
- The bots' interactions were governed by three feed models: 'Echo Chamber', 'Discover', and 'Bridging Algorithm', each fostering a different pattern of engagement and interaction (see the sketch after this list).
- The experiment found that discussions of partisan issues didn't necessarily result in clashes when the numbers of participants on both sides were roughly equal, suggesting a potential for less polarized online discourse.
- Political scientist Lisa Argyle sees promise in these AI models, suggesting they might lead to more civil social media discourse in the future.
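The article does not describe how the three feed models were implemented, so the sketch below is only a rough illustration of how such models could differ in code, not the study's actual method. The `Post` class, the function names, and the like-count scoring rules are all hypothetical assumptions.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_party: str   # e.g. "D" or "R" (hypothetical labels)
    likes_own: int      # likes from the author's own party
    likes_other: int    # likes from the opposing party

def echo_chamber_feed(posts, reader_party, k=10):
    """Show only posts from the reader's own party, ranked by total likes."""
    same = [p for p in posts if p.author_party == reader_party]
    return sorted(same, key=lambda p: p.likes_own + p.likes_other, reverse=True)[:k]

def discover_feed(posts, reader_party, k=10):
    """Rank purely by total engagement, regardless of party."""
    return sorted(posts, key=lambda p: p.likes_own + p.likes_other, reverse=True)[:k]

def bridging_feed(posts, reader_party, k=10):
    """Boost posts that earned likes from the party opposite the author's,
    a stand-in for 'bridging' content that appeals across the divide."""
    return sorted(posts, key=lambda p: p.likes_other, reverse=True)[:k]

# Example: a cross-partisan reader's bridging feed surfaces posts
# that drew approval from across the aisle.
posts = [Post("D", 40, 5), Post("R", 10, 30), Post("D", 3, 25)]
print(bridging_feed(posts, reader_party="R", k=2))
```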