Llama 3 is expected to be twice the size of its predecessor, Llama 2, with more than 140 billion parameters compared to Llama 2’s 70 billion. Even so, Llama 3’s parameter count remains far smaller than that of the original GPT-4 Mixture-of-Experts model, at 1.76 trillion parameters. Whether Llama 3 will remain a pure language model or move into multimodal territory, understanding and generating images and video, has yet to be decided. In any case, parameter count is no longer seen as the sole, or even most important, indicator of an AI model’s potential output quality.
Key takeaways:
- Social media giant Meta is developing Llama 3, an open-source AI model expected to rival OpenAI’s GPT-4, with a planned release in July this year.
- The new model aims to offer enhanced user interaction by providing contextual insights on complex subjects, marking a shift from the more conservative approach of its predecessor, Llama 2.
- Meanwhile, three prominent AI safety experts have recently left Meta, suggesting potential challenges in its pursuit of safer AI technologies.
- Llama 3 is expected to be twice the size of Llama 2, with over 140 billion parameters, though still only a fraction of the 1.76 trillion parameters in the original GPT-4 Mixture-of-Experts model.