Why does AI hallucinate?

Jun 18, 2024 - technologyreview.com
The article examines the problem of "hallucination" in large language models, where AI chatbots such as the World Health Organization's SARAH and Meta's Galactica generate incorrect or fabricated information. Although these chatbots can provide services in multiple languages and around the clock, they have been found to produce fake names and addresses, nonexistent academic papers, and even invented legal citations. This tendency to generate false information is a significant obstacle to widespread adoption, because it makes the tools difficult to trust.

The article explains that these models generate responses from scratch rather than retrieving stored text: using billions of numerical parameters, they compute a probability for each possible next word and sample new sequences word by word. Some researchers believe that training the models on more text will reduce their error rate; others point to techniques such as chain-of-thought prompting to improve accuracy. But as long as the models are probabilistic, there will always be an element of chance in what they produce. The article concludes that managing expectations about these tools may be the best available response to the problem.
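
To make the probabilistic point concrete, here is a minimal sketch of next-word sampling. The vocabulary, scores, and temperature below are invented for illustration and are not taken from the article; real models perform the same computation over tens of thousands of tokens and billions of parameters.

```python
import numpy as np

VOCAB = ["Paris", "London", "Berlin", "a fabricated clinic name"]  # toy vocabulary
LOGITS = np.array([3.2, 1.1, 0.8, 0.3])                            # hypothetical model scores

def sample_next_word(logits, temperature=0.8, rng=None):
    """Turn raw scores into a probability distribution (softmax) and sample one word."""
    rng = rng or np.random.default_rng()
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return VOCAB[rng.choice(len(VOCAB), p=probs)]

# Repeated runs show the element of chance the article describes:
# the highest-scoring word usually wins, but not always.
print([sample_next_word(LOGITS) for _ in range(5)])
```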

Key takeaways:

  • Chatbots like SARAH, built on GPT-3.5, generate responses from scratch, but the underlying large language models often "hallucinate," or make things up, producing inaccurate information.
  • The World Health Organization and other entities have warned about the inaccuracies of these chatbots, which have resulted in issues like fake names for nonexistent clinics and invented refund policies.
  • While some researchers believe that training these models on more text will reduce their error rate, others are exploring techniques like chain-of-thought prompting and fact-checking to improve accuracy (see the prompting sketch after this list).
  • Despite improvements, the probabilistic nature of these models means there will always be an element of chance in what they produce, and the more accurate they become, the more likely people are to overlook errors.
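
Chain-of-thought prompting, mentioned above as one accuracy technique, amounts to instructing the model to lay out intermediate reasoning before answering. A minimal sketch, assuming a generic chat model behind a hypothetical `ask_model` function (the prompt wording and function name are illustrative, not from the article):

```python
def build_cot_prompt(question: str) -> str:
    """Wrap a question in instructions that ask the model to reason step by step."""
    return (
        "Answer the question below. Think step by step, stating each intermediate "
        "fact before giving a final answer, and say \"I don't know\" rather than "
        "guessing when you are unsure.\n\n"
        f"Question: {question}\n"
        "Reasoning:"
    )

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a real chat-completion API call."""
    raise NotImplementedError("wire this up to your model provider")

if __name__ == "__main__":
    prompt = build_cot_prompt("Which clinics in Seattle offer free flu shots?")
    print(prompt)                  # inspect the assembled prompt
    # answer = ask_model(prompt)   # uncomment once ask_model calls a real model
```

Asking for explicit steps gives the model more chances to surface and check intermediate facts, though, as the article notes, it does not remove the underlying randomness.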