The 1.58-bit LLM not only defines a new scaling law and a new recipe for training high-performance, cost-effective LLMs, but also enables a new computation paradigm, opening up opportunities to design hardware specifically optimized for 1-bit LLMs.
Key takeaways:
- The study introduces a 1-bit Large Language Model (LLM) variant called BitNet b1.58, in which every parameter (weight) is ternary, taking values in {-1, 0, 1}; encoding three states requires log2(3) ≈ 1.58 bits per weight, hence the name (see the quantization sketch after this list).
- BitNet b1.58 matches a full-precision (FP16) Transformer LLM of the same model size and training tokens in both perplexity and end-task performance, while being substantially more cost-effective in latency, memory, throughput, and energy consumption.
- The 1.58-bit LLM defines a new scaling law and a new recipe for training future generations of LLMs that are both high-performing and cost-effective.
- The new model enables a new computation paradigm (with ternary weights, matrix multiplication reduces to additions and subtractions; see the second sketch below) and opens the door to designing hardware specifically optimized for 1-bit LLMs.
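To make the ternary representation concrete, the following is a minimal NumPy sketch of an absmean-style quantizer in the spirit of BitNet b1.58: scale a weight matrix by its mean absolute value, then round each entry to the nearest value in {-1, 0, 1}. The function name and exact details are illustrative assumptions, not the authors' released implementation.

```python
import numpy as np

def ternarize_absmean(W: np.ndarray, eps: float = 1e-8):
    """Quantize a weight matrix to {-1, 0, 1} plus a single per-matrix scale.

    Illustrative absmean-style scheme (an assumption for exposition): divide by
    the mean absolute weight, round to the nearest integer, clip to {-1, 0, 1}.
    """
    gamma = np.mean(np.abs(W)) + eps                       # absmean scale
    W_ternary = np.clip(np.round(W / gamma), -1, 1).astype(np.int8)
    return W_ternary, gamma                                # dequantize as W_ternary * gamma

# Example: quantize a random weight matrix and inspect the resulting codebook.
W = np.random.randn(4, 4).astype(np.float32)
W_t, gamma = ternarize_absmean(W)
print(np.unique(W_t))                            # values drawn only from {-1, 0, 1}
print(float(np.mean(np.abs(W - W_t * gamma))))   # average quantization error
```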
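The "new computation paradigm" follows from the same fact: with weights restricted to {-1, 0, 1}, the inner loop of a matrix-vector product needs no multiplications at all, only additions, subtractions, and skips for zero weights. The loop below is a deliberately naive sketch of that idea, not a performance kernel or the authors' hardware design.

```python
import numpy as np

def ternary_matvec(W_ternary: np.ndarray, x: np.ndarray, gamma: float) -> np.ndarray:
    """Compute (W_ternary @ x) * gamma using only adds/subs in the inner loop."""
    out = np.zeros(W_ternary.shape[0], dtype=np.float64)
    for i in range(W_ternary.shape[0]):
        acc = 0.0
        for j in range(W_ternary.shape[1]):
            w = W_ternary[i, j]
            if w == 1:
                acc += x[j]          # +1 weight: add the activation
            elif w == -1:
                acc -= x[j]          # -1 weight: subtract the activation
            # 0 weight: skip the activation entirely
        out[i] = acc
    return out * gamma               # single scalar rescale at the end

# Sanity check against an ordinary floating-point matrix-vector product.
W_t = np.random.choice([-1, 0, 1], size=(4, 8)).astype(np.int8)
x = np.random.randn(8)
gamma = 0.5
print(np.allclose(ternary_matvec(W_t, x, gamma), (W_t.astype(np.float64) @ x) * gamma))
```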