The author further emphasizes that recognizing this behaviour in LLMs as confabulation can help us compare and contrast it with human behaviour. Humans confabulate in various circumstances, especially in cases of memory impairment and in split-brain patients. The author humorously suggests that LLMs are akin to humans with extreme amnesia and a lack of central coherence.
Key takeaways:
- The term 'hallucinating' is often incorrectly used to describe an LLM making up false information in response to a query. The correct term for this phenomenon is 'confabulation'.
- Confabulation is a term used in psychiatry for when people with brain damage, especially memory damage, invent plausible-sounding justifications with no basis in fact instead of admitting they don't know the answer.
- This behaviour is identical to what LLMs do. When forced to answer using a fact they do not know, they cannot say that they don't know, so they make up something plausible instead (see the sketch after this list).
- Recognizing that what LLMs are really doing is confabulating can help us compare and contrast their behaviour with that of humans, especially people with memory impairments and split-brain patients.
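To make the mechanism concrete, here is a minimal toy sketch in Python. The prompt, vocabulary, and probabilities are invented for illustration; this is not a real LLM or any model's actual API. The point is structural: an autoregressive sampler must always emit some continuation, and declining to answer is just another low-probability string.

```python
import random

# Toy illustration, NOT a real LLM: the prompt, vocabulary, and
# probabilities below are invented for this sketch. An autoregressive
# sampler must always emit *some* continuation, so for a fact it does
# not know, it picks whatever looks most plausible -- it confabulates.
next_token_probs = {
    # Hypothetical distribution after the prompt
    # "The capital of the fictional country of Zorbia is":
    "Zorb City": 0.41,
    "Zorbograd": 0.35,
    "Port Zorbia": 0.22,
    "unknown": 0.02,  # abstention is just another (unlikely) string
}

def sample(probs: dict[str, float]) -> str:
    """Sample a continuation; declining to answer is not an option
    unless the model assigns it high probability, which models trained
    mainly on fluent, confident text rarely do."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample(next_token_probs))  # e.g. 'Zorb City': fluent, confident, false
```

Nothing in the sampling step checks the output against reality; fluency and plausibility are the only criteria, which is exactly the confabulation pattern described above.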