The researchers believe this scalable solution could help combat the toxic culture prevalent in online discussions. They suggest that AI, when used properly, can foster empathetic and respectful conversations across various digital platforms. The study was published in the journal PNAS.
Key takeaways:
- Researchers from Brigham Young University and Duke University have developed an AI that can moderate online chats, improving their quality and promoting civility by suggesting rephrasings for user comments.
- The AI does not alter comments on its own; instead, it offers the user optional rephrasings that make the statement more polite.
- In the study, participants accepted AI-suggested rephrasings two-thirds of the time, and those who accepted one or more suggestions reported significantly higher conversation quality and greater openness to opposing views.
- The researchers believe this AI intervention could be a scalable way to combat toxic online culture and foster more empathetic, respectful discussions.