The new Stable LM 2 models are trained on more data, including multilingual documents in six languages in addition to English. Stability AI is releasing the models in both pre-trained and fine-tuned versions, along with what it describes as the last model checkpoint before the pre-training cooldown. The goal is to give developers more tools to innovate and build on top of the current model.
Key takeaways:
- Stability AI has released one of its smallest models yet, Stable LM 2 1.6B, a large language model (LLM) for text content generation.
- The new model aims to lower barriers and enable more developers to participate in the generative AI ecosystem, incorporating multilingual training data in seven languages (English plus six others).
- Despite its size, Stable LM 2 1.6B outperforms other small language models with under 2 billion parameters on most benchmarks, and even surpasses some larger models.
- Stability AI is making the new models available in pre-trained and fine-tuned options, as well as the final pre-cooldown checkpoint, which lets developers further specialize the model for other tasks or datasets (see the sketch after this list for loading the pre-trained model).
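For developers who want to try the pre-trained option, the model can be loaded with the Hugging Face transformers library. The sketch below is a minimal example, assuming the repository id `stabilityai/stablelm-2-1_6b` and standard transformers APIs; check the model card on Hugging Face for the exact id and any license requirements before use.

```python
# Minimal sketch: load and prompt the pre-trained Stable LM 2 1.6B.
# The repo id below is an assumption based on Stability AI's Hugging Face
# naming conventions; verify it against the published model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stablelm-2-1_6b"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision keeps the 1.6B model's memory footprint small
    trust_remote_code=True,
)

prompt = "The benefits of small language models include"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same pattern applies to the fine-tuned variant by swapping in its repository id; at 1.6 billion parameters, the model is small enough to run on a single consumer GPU or, more slowly, on CPU.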