Infinite context windows? LLMs can be extended to infinite sequence lengths without fine-tuning

Oct 03, 2023 - notes.aimodels.fyi
Researchers from MIT, Meta AI, and Carnegie Mellon have developed StreamingLLM, a framework that lets large language models (LLMs) perform language modeling over effectively infinite-length inputs without expensive fine-tuning. The method exploits the LLMs' tendency to treat initial tokens as "attention sinks" that anchor the distribution of attention scores. By caching these initial tokens alongside the most recent ones, StreamingLLM restores stable perplexity and decodes up to 22x faster than previous techniques. This development could expand the use of LLMs in areas such as assistive AI, tutoring systems, and long-form document generation.

However, deploying LLMs for streaming poses challenges, notably memory overhead and declining prediction quality. Windowed attention, which caches only the most recent tokens, mitigates the memory overhead but causes a drastic drop in predictive performance once the initial tokens are evicted. StreamingLLM addresses this by maintaining a small cache that keeps the initial "sink" tokens alongside the most recent tokens. This allows LLMs to handle context lengths exceeding 4 million tokens, a 1000x increase over the sequence length seen during pre-training. Despite these advances, concerns around bias, transparency, and responsible AI remain when deploying such powerful models.
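The cache policy described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual implementation: the class and parameter names are invented for clarity, and plain integers stand in for per-token key/value pairs. The idea is simply to pin the first few "sink" entries and roll a fixed window over the rest.

```python
class StreamingKVCache:
    """Illustrative sketch of StreamingLLM's eviction policy (names are
    hypothetical): keep the first `num_sinks` entries pinned as attention
    sinks plus a rolling window of the most recent entries, and evict
    everything in between."""

    def __init__(self, num_sinks=4, window=8):
        self.num_sinks = num_sinks
        self.window = window
        self.entries = []  # stand-in for per-token key/value pairs

    def append(self, kv):
        self.entries.append(kv)
        if len(self.entries) > self.num_sinks + self.window:
            # Drop the oldest non-sink entry; sink tokens stay cached forever.
            del self.entries[self.num_sinks]

    def context(self):
        return list(self.entries)


# Stream 100 tokens through a cache that holds 4 sinks + 8 recent tokens.
cache = StreamingKVCache(num_sinks=4, window=8)
for t in range(100):
    cache.append(t)
print(cache.context())  # [0, 1, 2, 3, 92, 93, 94, 95, 96, 97, 98, 99]
```

The memory footprint stays constant no matter how long the stream runs, which is what makes the approach viable for indefinitely long sessions; pure windowed attention would be the same code with `num_sinks=0`, and it is precisely those evicted initial tokens that the paper identifies as the cause of the perplexity blow-up.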

Key takeaways:

  • Researchers from MIT, Meta AI, and Carnegie Mellon have proposed StreamingLLM, a framework that enables large language models (LLMs) to handle infinite-length language modeling without expensive fine-tuning.
  • StreamingLLM overcomes the fixed-window limitation of LLMs by maintaining a small cache containing the initial "sink" tokens and the most recent tokens, allowing the models to handle context lengths exceeding 4 million tokens.
  • StreamingLLM achieved up to 22x lower latency than prior approaches while retaining comparable perplexity, potentially enabling practical streaming deployment of LLMs in interactive AI systems.
  • Despite the technical advancements, concerns around bias, transparency, and responsible AI remain when deploying such powerful models interacting with humans.