GitHub - google-deepmind/recurrentgemma: Open weights language model from Google DeepMind, based on Griffin.

Apr 10, 2024 - github.com
RecurrentGemma is a family of open-weights language models developed by Google DeepMind and based on the Griffin architecture, which combines local attention with linear recurrences to enable fast inference when generating long sequences. The repository contains the model implementation together with examples for sampling and fine-tuning. The Flax implementation is recommended for most users because it is the optimized path; an unoptimized PyTorch implementation is also provided for reference.
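For orientation, a minimal sampling sketch with the Flax implementation might look like the block below. The module path recurrentgemma.jax and the names load_parameters, GriffinConfig, Griffin, and Sampler are assumptions based on the description above, not the verbatim API; the repository's sampling Colab is the authoritative reference.

```python
# Hypothetical sampling sketch; names below are assumptions, check the repo's tutorials.
import sentencepiece as spm
import recurrentgemma.jax as recurrentgemma  # assumed module path for the Flax implementation

# Placeholder paths to a checkpoint and tokenizer downloaded from Kaggle.
CKPT_PATH = "/path/to/recurrentgemma-checkpoint"
VOCAB_PATH = "/path/to/tokenizer.model"

# Load parameters and build the Griffin model from the stored configuration (assumed helpers).
params = recurrentgemma.load_parameters(CKPT_PATH, "single_device")
config = recurrentgemma.GriffinConfig.from_flax_params_or_variables(params)
model = recurrentgemma.Griffin(config)

# SentencePiece tokenizer shipped alongside the checkpoint.
vocab = spm.SentencePieceProcessor()
vocab.Load(VOCAB_PATH)

# Sample a short continuation (assumed Sampler helper).
sampler = recurrentgemma.Sampler(model=model, vocab=vocab, params=params)
out = sampler(["The Griffin architecture combines"], total_generation_steps=64)
print(out.text[0])
```

The PyTorch reference implementation follows a similar load-then-sample flow but, as noted above, is not optimized.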

The model can be installed with either Poetry or pip, and separate instructions are given for installing the full project or only the dependencies of a specific implementation (JAX or PyTorch). Model checkpoints can be downloaded from Kaggle, and the repository also includes unit tests and example scripts. Tutorials are provided as Colab notebooks. RecurrentGemma runs on CPU, GPU, or TPU, with the Flax implementation optimized for TPU. The code is licensed under the Apache License, Version 2.0, and the project is not an official Google product.
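Installation itself is a shell step; as a rough sketch, installing straight from the GitHub repository with pip (generic pip-from-git usage, not a command quoted from the README) and then confirming which backend JAX will target could look like this:

```python
# Install from the shell first (generic pip-from-git form; the README may define
# optional extras for the JAX- or PyTorch-specific dependencies):
#   pip install git+https://github.com/google-deepmind/recurrentgemma.git
#
# Then confirm which accelerator JAX will use for inference (standard JAX API):
import jax

print(jax.default_backend())  # "cpu", "gpu", or "tpu"
print(jax.devices())          # devices visible to this process
```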

Key takeaways:

  • RecurrentGemma is a family of open-weights language models by Google DeepMind, based on the Griffin architecture, which achieves fast inference when generating long sequences.
  • The model implementation and examples for sampling and fine-tuning are available in the repository, with the Flax implementation recommended as the optimized path.
  • RecurrentGemma can be installed using either Poetry or pip, with instructions for installing dependencies for the full project or for a specific implementation.
  • Model checkpoints are available through Kaggle, and Colab notebook tutorials cover sampling and fine-tuning with either JAX or PyTorch.