Feature Story
Why does AI have to be nice? Researchers propose ‘Antagonistic AI’
Feb 28, 2024 · venturebeat.com
The researchers identified three types of antagonism: adversarial, argumentative, and personal, and provided techniques for implementing these features in AI systems. However, they emphasized that antagonistic AI should still be responsible and ethical: users should be able to opt in and should have access to an emergency stop. They argue that antagonistic AI can reflect the full range of human values and can challenge entrenched cultural norms.
Key takeaways
- Researchers from MIT and the University of Montreal have introduced the concept of Antagonistic AI: AI systems that are purposefully combative, critical, and rude.
- Antagonistic AI can be beneficial in many areas such as building resilience, promoting personal or collective growth, facilitating self-reflection and enlightenment, and fostering social bonding.
- The researchers identify three types of antagonism: adversarial, argumentative, and personal, and provide several techniques for implementing antagonistic features in AI systems.
- Despite its antagonistic nature, the researchers emphasize the need for responsible and ethical AI, suggesting that responsible antagonistic AI should be built on consent, context, and framing.