
jartine/Meta-Llama-3-70B-Instruct-llamafile · Hugging Face

Apr 20, 2024 - huggingface.co
This page summarizes Meta's Llama 3, a large language model (LLM) that comes in two sizes, 8B and 70B parameters, each in pre-trained and instruction-tuned variants. The models accept text input and generate text output only; they use an optimized transformer architecture and are designed for dialogue use cases. Distributed here as llamafiles, they run on Linux, macOS, Windows, FreeBSD, OpenBSD, and NetBSD on AMD64 and ARM64. Llama 3 was trained on a mix of publicly available online data and does not include Meta user data. The models were released on April 18, 2024, and are intended for commercial and research use in English.

The Llama 3 models were trained on Meta's Research SuperCluster and production clusters; pretraining consumed a cumulative 7.7M GPU-hours of computation. The models were evaluated on standard automatic benchmarks using Meta's internal evaluations library. The model card also provides instructions for using the models with the `transformers` library and with the original `llama3` codebase. Developers are advised to perform safety testing and tuning tailored to their specific applications before deployment.
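Since this repository distributes Llama 3 as llamafiles (single-file executables), a typical way to run the model locally follows the standard llamafile workflow: download, mark executable, launch. The sketch below assumes a particular quantized filename, which should be checked against the repository's actual file list before use.

```shell
# Download a llamafile from the repository (the exact filename,
# including the quantization suffix, is an assumption; check the
# "Files" tab on the Hugging Face page for the real names).
wget https://huggingface.co/jartine/Meta-Llama-3-70B-Instruct-llamafile/resolve/main/Meta-Llama-3-70B-Instruct.Q4_0.llamafile

# Mark it executable (on Windows, rename it to end in .exe instead).
chmod +x Meta-Llama-3-70B-Instruct.Q4_0.llamafile

# Launch it; by default llamafile starts a local chat web UI.
./Meta-Llama-3-70B-Instruct.Q4_0.llamafile

# Or run a one-shot prompt from the command line:
./Meta-Llama-3-70B-Instruct.Q4_0.llamafile -p "Explain transformer attention briefly."
```

Note that the 70B model requires substantial RAM even when quantized; the 8B variant is the more practical choice on typical consumer hardware.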

Key takeaways:

  • Meta Llama 3 is a large language model developed by Meta, optimized for dialogue use cases, that outperforms many available open-source chat models.
  • Llama 3 comes in two sizes, 8B and 70B parameters, in pre-trained and instruction-tuned variants, and runs on multiple operating systems.
  • The model was trained on over 15 trillion tokens from publicly available sources and does not include Meta user data.
  • Meta has taken steps to limit misuse and harm and supports the open-source community. Developers are encouraged to perform safety testing and tuning tailored to their specific applications.
