Researchers Develop a More Efficient Way to Fine-Tune Large Language Models for Long Text Sequences - SuperAGI News

Sep 22, 2023 - news.bensbites.co
Researchers from The Chinese University of Hong Kong and MIT have developed LongLoRA, a new fine-tuning approach that efficiently extends the context sizes of large language models (LLMs). The approach uses a dual strategy: it introduces a new attention mechanism, Shift Short Attention (S2-Attn), which restricts attention to local token groups while shifting half the attention heads so information still flows between neighboring groups during training, and it improves the existing low-rank adaptation technique, LoRA, by also making the embedding and normalization layers trainable. Together, these changes let a model be adapted to much longer contexts at a fraction of the computational cost of standard full fine-tuning.
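To make the S2-Attn idea concrete, here is a minimal sketch of the token-grouping step, assuming a standard (batch, sequence, heads, head-dim) tensor layout. This is an illustration of the described mechanism, not the authors' released code; the function name and layout are assumptions.

```python
import torch

def s2_attn_grouping(x: torch.Tensor, group_size: int) -> torch.Tensor:
    """Sketch of the S2-Attn grouping idea.

    x: (batch, seq_len, num_heads, head_dim); seq_len must be a
    multiple of group_size. Attention is later computed within each
    group. In half of the heads the tokens are rolled by half a group,
    so the shifted groups straddle the boundaries of the unshifted
    ones and information can pass between neighboring groups.
    """
    B, N, H, D = x.shape
    assert N % group_size == 0, "sequence length must divide into groups"
    x = x.clone()
    # Shift the second half of the heads by half the group size.
    x[:, :, H // 2:] = torch.roll(x[:, :, H // 2:], shifts=-group_size // 2, dims=1)
    # Fold the sequence into groups; standard attention then runs per group.
    return x.reshape(B * (N // group_size), group_size, H, D)
```

Because each group attends only within itself, the attention cost grows with the group size rather than the full sequence length, which is where the training-time savings come from.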

LongLoRA is also notably easy to adopt: it can be implemented in two lines of code during training and requires no changes at inference time. It can fine-tune a model to a context of up to 100,000 tokens on a single machine with eight A100 GPUs, a feat previously considered computationally prohibitive. The researchers have also released LongQA, a dataset of more than 3,000 long-context question-answer pairs, to improve LLMs' conversational abilities over long inputs. The team reports that LongLoRA is compatible with a variety of LLM families and position encodings, potentially opening up applications that require understanding extended text sequences.
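The "two lines of code" refers to swapping S2-Attn in during training; the improved-LoRA half of the recipe can be approximated with off-the-shelf tooling. Below is a hedged sketch using Hugging Face's peft library; the module names and checkpoint assume a LLaMA-style model and are illustrative, not the authors' released configuration.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Example base checkpoint (assumed; any LLaMA-style causal LM would do).
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    # Low-rank adapters on the attention projections, as in standard LoRA.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    # The LongLoRA-style twist: train embedding and normalization
    # layers in full alongside the adapters (module names assumed).
    modules_to_save=["embed_tokens", "norm"],
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # embeddings/norms now count as trainable
```

Since embedding and normalization layers hold a small fraction of a model's parameters, unfreezing them adds little memory overhead while, per the article, closing much of the quality gap to full fine-tuning on long contexts.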

Key takeaways:

  • Researchers from The Chinese University of Hong Kong and MIT have developed LongLoRA, a new fine-tuning approach designed to extend the context sizes of large language models (LLMs) efficiently.
  • LongLoRA introduces a dual-strategy approach: a new attention mechanism called Shift Short Attention (S2-Attn), plus an improvement to the existing low-rank adaptation technique (LoRA) that also makes the embedding and normalization layers trainable.
  • LongLoRA can be implemented in just two lines of code during the training phase, requires no changes during the inference stage, and is compatible with existing optimization techniques such as FlashAttention-2.
  • The team has released a dataset called LongQA, featuring more than 3,000 long context question-answer pairs, and made the full research paper, code, and dataset publicly available.