
LLM as an n-dimensional Object in n-dimensional Space

Apr 30, 2024 - medium.com
The article presents language models, whether symbolic or neural network-based, as n-dimensional objects in an n-dimensional space. These objects are initially formed from training sequences or datasets, and the "understanding" of a prompt is treated as an approximation of the surfaces of these objects. The accuracy of this approximation can vary, sometimes requiring transformations before a correct approximation is reached. Fine-tuning and retraining are viewed as modifications of the surface.

The predictive capability of a language model is determined by the shape of the object's surface, while its generative abilities rest on the ability to move along the surface from any point to other points in any permissible direction. The selection of these points should not be determined solely by the language model, but by a generalization derived from it. The article asks whether a language model alone is sufficient for cognition and reasoning, and concludes that, mathematically, it is not.
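The geometric analogy above can be made concrete with a toy sketch. This is not the article's method, only an illustration under simplifying assumptions: the "model" is a finite set of points sampled from a surface (here, a unit sphere standing in for the n-dimensional object), "understanding" a prompt is projecting it onto the nearest surface point, and "generation" is a walk along the surface to nearby, not-yet-visited points.

```python
import numpy as np

rng = np.random.default_rng(0)

# A stand-in "surface": 500 points sampled on the unit sphere in 5 dimensions.
points = rng.normal(size=(500, 5))
points /= np.linalg.norm(points, axis=1, keepdims=True)

def approximate(prompt_vec, points):
    """'Understanding' a prompt: find the nearest point on the surface."""
    dists = np.linalg.norm(points - prompt_vec, axis=1)
    return int(np.argmin(dists))

def walk(start_idx, points, steps=5):
    """'Generation': repeatedly step to the nearest unvisited surface point."""
    visited = {start_idx}
    path = [start_idx]
    current = points[start_idx]
    for _ in range(steps):
        dists = np.linalg.norm(points - current, axis=1)
        dists[list(visited)] = np.inf  # only move to new points
        nxt = int(np.argmin(dists))
        visited.add(nxt)
        path.append(nxt)
        current = points[nxt]
    return path

prompt = rng.normal(size=5)          # an arbitrary "prompt" vector
start = approximate(prompt, points)  # project it onto the surface
trajectory = walk(start, points)     # move along the surface from there
print(len(trajectory))
```

Note that the walk here is chosen greedily by distance alone; the article's point is precisely that which point to move to next should come from some generalization beyond the model itself, which this sketch does not attempt.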

Key takeaways:

  • Language models, whether symbolic or neural network-based, can be conceptualized as n-dimensional objects in an n-dimensional space, formed using training sequences.
  • The "understanding" of a prompt can be seen as an approximation of the surfaces of this n-dimensional object, which may require transformations for correct approximation.
  • Predictive capability is ensured by the shape of the surface of the object, while generative abilities are provided by the ability to move along the surface in any permissible direction.
  • The selection of points on the surface should not be determined solely by the language model, but by some generalization derived from it, raising questions about the adequacy of a language model alone for cognition and reasoning.
