Feature Story
A Tutorial on LLM - Haifeng Li - Medium
Sep 15, 2023 · medium.com
The author concludes by emphasizing that while LLMs are an exciting area with potential for rapid innovation, they learn language differently from humans and lack access to the social and perceptual context that human language learners use. The author suggests these differences could be areas for future improvement or for the development of new learning algorithms.
Key takeaways
- Generative artificial intelligence (GenAI), especially ChatGPT, has the ability to generalize to many different tasks due to its training on a vast quantity of unlabeled data.
- Language models like GPT-4 demonstrate a degree of reasoning capability, although this formulation alone may not be sufficient to reach artificial general intelligence (AGI).
- Increasing the capacity of a language model improves performance log-linearly across tasks, as demonstrated by GPT-2 and GPT-3.
- Reinforcement Learning from Human Feedback (RLHF) and Instruction Fine-Tuning are techniques used to align language models with user intent and to specify which task the model should perform, respectively.
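The log-linear scaling mentioned above can be sketched numerically: if loss falls linearly in the logarithm of parameter count, every tenfold increase in capacity buys the same fixed improvement. The function and coefficients below are purely illustrative assumptions, not fitted values from GPT-2 or GPT-3.

```python
import math

def predicted_loss(params: float, a: float = 5.0, b: float = 0.3) -> float:
    """Hypothetical log-linear scaling curve: loss decreases linearly
    in log10(parameter count). Coefficients a and b are made-up
    illustrative values, not measurements from any real model."""
    return a - b * math.log10(params)

# Each 10x jump in parameters reduces predicted loss by the same amount (b).
for n in [1e8, 1e9, 1e10, 1e11]:
    print(f"{n:.0e} params -> predicted loss {predicted_loss(n):.2f}")
```

Note that the constant per-decade gain is the signature of log-linear scaling: the curve looks like a straight line when parameter count is plotted on a logarithmic axis.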