Researchers Discover Emergent Linear Structures in How LLMs Represent Truth

Oct 13, 2023 - news.bensbites.co
Researchers from MIT and Northeastern University have presented evidence that large language models (LLMs) may contain a specific "truth direction" encoding factual truth values. The study used diverse datasets of simple true/false factual statements and found clear linear separation between true and false examples in the models' internal representations. Probes trained on one dataset accurately classified the truth of statements from entirely distinct datasets, suggesting the probes pick up a general notion of truth rather than dataset-specific correlations. The researchers also intervened on the models' internal representations: adding the "truth vector" identified by a probe to an LLM's activations made it assess false statements as true, and vice versa.
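
To make the probing setup concrete, here is a minimal sketch of training a linear probe on hidden-state activations. The model (GPT-2 via Hugging Face Transformers), the layer index, and the tiny dataset are all illustrative assumptions; the study probes larger LLaMA-family models over many layers with far bigger curated datasets.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

MODEL_NAME = "gpt2"  # assumed stand-in; the study probes larger LLaMA-family models
LAYER = 6            # assumed middle layer; the paper sweeps over layers

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

def last_token_activation(statement: str) -> torch.Tensor:
    """Hidden state of the statement's final token at the chosen layer."""
    inputs = tokenizer(statement, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    # hidden_states[0] is the embedding output, so index LAYER is block LAYER's output
    return out.hidden_states[LAYER][0, -1]

# Tiny illustrative dataset; the study uses large curated true/false sets.
statements = [
    ("The city of Paris is in France.", 1),
    ("The city of Paris is in Japan.", 0),
    ("Two plus two equals four.", 1),
    ("Two plus two equals five.", 0),
]
X = torch.stack([last_token_activation(s) for s, _ in statements]).numpy()
y = [label for _, label in statements]

# A logistic-regression probe; its weight vector is a candidate "truth direction".
probe = LogisticRegression(max_iter=1000).fit(X, y)
```

The "linear separation" finding amounts to a probe like this achieving high accuracy, and cross-dataset generalization means a probe fit on one statement set still scores well on another.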

The research indicates that deep learning models like LLMs can linearly represent factual truth in their learned internal representations. Such a "truth vector" could potentially be used to filter out false statements before an LLM outputs them. However, the study focused on simple factual statements, and more complex truths may be harder to capture. The methods may also transfer poorly to cutting-edge LLMs with different architectures, and more work is needed to extract "truth thresholds" that support firm true/false classifications.
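
As a thought experiment on that filtering idea, a hypothetical post-hoc check could score candidate outputs with the probe from the sketch above and flag low-scoring ones. The 0.5 cutoff here is purely an assumption, chosen for illustration, since the article notes that reliable truth thresholds have not yet been established.

```python
# Hypothetical filter reusing `probe` and `last_token_activation` from the
# sketch above. The 0.5 threshold is an assumption, not an established value.
def passes_truth_filter(statement: str, threshold: float = 0.5) -> bool:
    act = last_token_activation(statement).numpy().reshape(1, -1)
    p_true = probe.predict_proba(act)[0, 1]  # probe's probability of "true"
    return p_true >= threshold

for s in ["The Moon orbits the Earth.", "The Moon orbits Mars."]:
    print(s, "->", "keep" if passes_truth_filter(s) else "flag")
```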

Key takeaways:

  • Researchers have found evidence that large language models (LLMs) may contain a specific "truth direction" encoding factual truth values in their internal representations.
  • Probes trained on one dataset accurately classified the truth of entirely distinct datasets, suggesting they identify a general notion of truth rather than patterns coincidentally correlated with truth in narrow datasets.
  • By manipulating LLM internal representations, the researchers could flip the assessed truth value of statements, providing strong causal evidence for a truth direction (see the sketch after this list).
  • Limitations remain: the methods may not transfer to cutting-edge LLMs with different architectures, and truths involving ambiguity, controversy, or nuance are harder to capture.
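
The intervention in the third takeaway can be sketched roughly as follows, reusing `model`, `LAYER`, and `probe` from the earlier sketch: add a scaled copy of the probe's weight vector to the hidden states at one layer via a PyTorch forward hook. The steering strength `ALPHA` is an illustrative choice, not a value from the paper.

```python
import torch

# Normalized probe weight vector as a candidate "truth direction".
truth_direction = torch.tensor(probe.coef_[0], dtype=torch.float32)
truth_direction = truth_direction / truth_direction.norm()
ALPHA = 8.0  # assumed steering strength

def add_truth_direction(module, inputs, output):
    hidden = output[0] if isinstance(output, tuple) else output
    steered = hidden + ALPHA * truth_direction  # broadcasts over batch and positions
    return (steered,) + output[1:] if isinstance(output, tuple) else steered

# GPT-2 exposes its transformer blocks as model.transformer.h; since
# hidden_states[0] is the embedding output, block LAYER - 1 produces
# hidden_states[LAYER].
handle = model.transformer.h[LAYER - 1].register_forward_hook(add_truth_direction)
# ... re-run a false statement here and re-score it with the probe ...
handle.remove()  # detach the hook afterwards
```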