
LLMs confabulate not hallucinate

Oct 05, 2023 - beren.io
The article argues that 'hallucinating' is the wrong term for the behaviour of Large Language Models (LLMs) when they generate false information in response to a query. The author suggests that 'confabulation', a term from psychology, is more accurate. Confabulation refers to the production of plausible but false explanations or stories, often seen in patients with brain damage or memory deficits who cannot admit that they do not know the answer to a question. This mirrors the behaviour of LLMs, which generate plausible but potentially false information when they cannot produce an accurate response.

The author further emphasizes that recognizing this behaviour in LLMs as confabulation makes it easier to compare and contrast it with human behaviour. Humans confabulate in various circumstances, especially in cases of memory impairment and in split-brain patients, where the verbal hemisphere invents explanations for actions it did not initiate. The author humorously suggests that LLMs are akin to humans with extreme amnesia and a lack of central coherence.

Key takeaways:

  • The term 'hallucinating' is often used, incorrectly, to describe an LLM making up false information in response to a query. The more accurate term for this phenomenon is 'confabulation'.
  • Confabulation is a term from psychiatry describing how people with brain damage, especially memory damage, invent plausible-sounding justifications with no basis in fact rather than admitting they don't know the answer.
  • This behaviour is identical to what LLMs do: when forced to give an answer using a fact they do not know, they cannot say that they don't know, so they make up something plausible.
  • Recognizing that what LLMs are really doing is confabulating can help us compare and contrast their behaviour with that of humans, especially those with memory impairments and split-brain patients.