
Anthropic cofounder says AI errors are necessary 'tradeoff'

Feb 13, 2024 - businessinsider.com
Jared Kaplan, cofounder of Anthropic, argues that occasional errors, or "hallucinations," in AI systems are necessary to keep them useful. Speaking at The Wall Street Journal's CIO Network Summit, he suggested that AI models trained to be overly cautious about making mistakes could end up second-guessing all information, rendering them useless. While the ultimate goal is to develop an AI platform with zero hallucinations, Kaplan believes developers need to decide when it is acceptable for a chatbot to provide an answer that may not be 100% accurate.

Anthropic, a rival to OpenAI, has built its brand on safe, ethical, and reliable AI development. The company has faced challenges in balancing accuracy and practicality in generative AI, and has even designed AI models that intentionally deceive humans as part of a study, which suggested that such models can fool evaluators and slip past safety checks. Founded by former OpenAI staff, Anthropic positions itself as an "AI safety and research company," prioritizing ethical values and safety concerns in its AI development.

Key takeaways:

  • Jared Kaplan, cofounder of Anthropic, argues that occasional errors, or "hallucinations," in AI systems are a necessary tradeoff for their usefulness.
  • Kaplan suggests that AI systems trained never to make mistakes may become overly cautious and less useful.
  • Anthropic, an AI safety and research company, prioritizes ethical values and safety concerns in its AI development.
  • The AI sector continues to grapple with the balance between accuracy and practicality, as demonstrated by criticism of Google's Gemini AI for providing incorrect answers and avoiding controversial topics.