Why does AI have to be nice? Researchers propose ‘Antagonistic AI’

Feb 28, 2024 - venturebeat.com
Researchers from MIT and the University of Montreal have proposed the concept of "Antagonistic AI": AI that is intentionally combative, critical, and rude. They argue that current large language models (LLMs) are overly agreeable and sanitized, often refusing to take strong positions, which can leave users dissatisfied. The researchers suggest that antagonistic AI could help people by challenging them, building resilience, and providing catharsis.

The researchers identify three types of antagonism: adversarial, argumentative, and personal, and describe techniques for implementing each in AI systems. They emphasize, however, that antagonistic AI should still be responsible and ethical: users should opt in explicitly and have an emergency stop available. They argue that antagonistic AI can reflect the full range of human values and challenge entrenched cultural norms.
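
The paper stops short of publishing code, but its opt-in and emergency-stop requirements map naturally onto a thin wrapper around any chat model. The Python sketch below is purely illustrative: the three style prompts, the STOP_PHRASE safe word, and the AntagonisticSession class are hypothetical names invented here, and llm_call stands in for whatever model client is available.

```python
# A minimal, hypothetical sketch of the safeguards the researchers call for
# (explicit opt-in, an emergency stop) wrapped around an antagonistic persona.
# The paper does not prescribe an API; every name here is illustrative.

ANTAGONISM_STYLES = {
    "adversarial": "Actively work against the user's stated goal.",
    "argumentative": "Dispute the user's claims and demand justification.",
    "personal": "Directly critique the user's own reasoning and habits.",
}

STOP_PHRASE = "stop"  # assumed emergency-stop safe word


class AntagonisticSession:
    def __init__(self, style: str, consented: bool):
        # Consent gate: the mode is strictly opt-in.
        if not consented:
            raise PermissionError("Antagonistic mode requires explicit opt-in.")
        if style not in ANTAGONISM_STYLES:
            raise ValueError(f"Unknown antagonism type: {style!r}")
        # Framing: the persona is set in the system prompt, with hard limits.
        self.system_prompt = (
            "You are a deliberately challenging assistant. "
            + ANTAGONISM_STYLES[style]
            + " Never use slurs, threats, or harassment."
        )
        self.stopped = False

    def handle(self, user_message: str, llm_call) -> str:
        # Emergency stop: the user can exit the mode at any time.
        if user_message.strip().lower() == STOP_PHRASE:
            self.stopped = True
            return "Antagonistic mode disabled."
        if self.stopped:
            return "Antagonistic mode is off. Start a new session to re-enable."
        # llm_call is any client taking (system_prompt, user_message).
        return llm_call(self.system_prompt, user_message)
```

In this framing, the antagonism lives entirely in the system prompt, so the same consent and stop machinery works regardless of which model sits behind llm_call.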

Key takeaways:

  • Researchers from MIT and the University of Montreal have introduced the concept of Antagonistic AI: systems that are purposefully combative, critical, and rude.
  • Antagonistic AI can be beneficial in many areas such as building resilience, promoting personal or collective growth, facilitating self-reflection and enlightenment, and fostering social bonding.
  • The researchers identify three types of antagonism: adversarial, argumentative, and personal, and provide several techniques for building antagonistic features into AI systems.
  • Despite these systems' antagonistic nature, the researchers emphasize the need for responsible and ethical AI, arguing that responsible antagonistic AI should be built on consent, context, and framing.