The open-source release of Grok-1 is a significant move, contrasting with OpenAI's proprietary-only offerings. However, the released checkpoint is not the fine-tuned model that powers the Grok AI assistant, which means X.ai now maintains both open-source and proprietary models. This development could encourage other companies in the AI space to support open-source developers.
Key takeaways:
- X.ai has announced that the Grok-1 large language model (LLM) is now available under the open-source Apache 2.0 license, giving users royalty-free access to the model weights and architecture code for both commercial and private use.
- The released Grok-1 is a raw base model from the pre-training phase, not fine-tuned for any specific application such as dialogue. At 314 billion parameters, it is larger than GPT-3/3.5 but likely smaller than GPT-4.
- Grok-1's architecture is based on a Mixture-of-Experts (MoE) design, generally considered a more efficient way to scale performance than simply inflating a dense model's parameter count, because only a fraction of the weights is active for any given token (see the sketch after this list).
- Despite the open-source release of Grok-1, X.ai maintains both proprietary and open-source models, a move that contrasts with the proprietary-only offerings of OpenAI.
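To make the MoE point concrete, here is a minimal sketch of top-2 expert routing: X.ai's release notes describe Grok-1 as having 8 experts with 2 active per token, so roughly 25% of the 314 billion weights are used on any given token. The names below (`top2_moe_layer`, the toy MLP experts) are illustrative stand-ins, not X.ai's actual implementation, which was published as JAX code.

```python
import numpy as np

def top2_moe_layer(x, gate_w, experts):
    """Route each token to its top-2 experts and mix their outputs.

    x       : (n_tokens, d_model) token activations
    gate_w  : (d_model, n_experts) router ("gate") weights
    experts : list of callables, each mapping a (d_model,) vector
              to a (d_model,) vector
    """
    logits = x @ gate_w                        # router scores, (n_tokens, n_experts)
    out = np.zeros_like(x)
    for i, token in enumerate(x):
        top2 = np.argsort(logits[i])[-2:]      # indices of the 2 best-scoring experts
        w = np.exp(logits[i][top2])
        w /= w.sum()                           # softmax over only the chosen experts
        for weight, e in zip(w, top2):
            out[i] += weight * experts[e](token)  # only 2 of n_experts run per token
    return out

# Toy demo: 8 experts (matching Grok-1's expert count), each a one-layer MLP stand-in.
rng = np.random.default_rng(0)
d_model, n_experts = 16, 8
experts = [
    (lambda W: (lambda t: np.tanh(t @ W) @ W.T))(
        rng.normal(size=(d_model, 4 * d_model)) / d_model
    )
    for _ in range(n_experts)
]
gate_w = rng.normal(size=(d_model, n_experts)) / d_model
x = rng.normal(size=(5, d_model))              # 5 tokens
print(top2_moe_layer(x, gate_w, experts).shape)  # -> (5, 16)
```

The efficiency argument is visible in the loop: total parameter count grows with the number of experts, but per-token compute grows only with the number of experts actually selected.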