The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits

Feb 28, 2024 - news.bensbites.co
The article introduces BitNet b1.58, a new 1-bit Large Language Model (LLM) variant in which every parameter (weight) is ternary, taking values in {-1, 0, 1}. The model matches a full-precision (FP16) Transformer LLM of the same size in both perplexity and end-task performance, while being significantly more cost-effective in latency, memory footprint, throughput, and energy consumption.

The 1.58-bit LLM (so named because a ternary parameter carries log2 3 ≈ 1.58 bits of information) not only defines a new scaling law and recipe for training high-performance, cost-effective LLMs, but also enables a new computation paradigm. This opens up opportunities for designing hardware specifically optimized for 1-bit LLMs.
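As a rough illustration of how full-precision weights can be reduced to ternary values, below is a minimal sketch assuming an absmean-style rounding scheme like the one the paper describes: scale each weight by the mean absolute value of the matrix, then round and clip to {-1, 0, 1}. The function name and example values are illustrative, not taken from the paper's code.

```python
import numpy as np

def absmean_ternary_quantize(w: np.ndarray, eps: float = 1e-8):
    """Map a full-precision weight matrix to ternary values {-1, 0, 1}.

    Sketch of an absmean-style scheme: scale by the mean absolute
    weight, then round to the nearest integer and clip to [-1, 1].
    """
    gamma = np.abs(w).mean()                       # per-tensor scale
    w_scaled = w / (gamma + eps)
    w_ternary = np.clip(np.rint(w_scaled), -1, 1)  # entries in {-1, 0, 1}
    return w_ternary.astype(np.int8), gamma

# Example: quantize a small random weight matrix and inspect the result
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.02, size=(4, 8)).astype(np.float32)
    w_q, gamma = absmean_ternary_quantize(w)
    print(w_q)     # ternary weight matrix
    print(gamma)   # scale factor, so w is approximately gamma * w_q
```

Because the quantized weights are only -1, 0, or 1, matrix multiplications reduce to additions and subtractions, which is the source of the latency and energy savings the article highlights.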

Key takeaways:

  • The study introduces a new 1-bit Large Language Model (LLM) variant called BitNet b1.58, where every parameter of the LLM is ternary {-1, 0, 1}.
  • BitNet b1.58 matches the full-precision Transformer LLM in perplexity and end-task performance, while being significantly more cost-effective in latency, memory, throughput, and energy consumption.
  • The 1.58-bit LLM defines a new scaling law and recipe for training new generations of LLMs that are both high-performance and cost-effective.
  • The new model enables a new computation paradigm and opens the door for designing specific hardware optimized for 1-bit LLMs.