Meta’s Llama 3 expected to be 2x bigger, set to launch in July

Mar 04, 2024 - aibeat.co
Social media giant Meta is preparing to release its next open-source AI model, Llama 3, in July this year. The new model is expected to rival OpenAI's GPT-4 and to offer enhanced user interaction by providing contextual insights on complex subjects. Meta is reportedly improving the model's ability to understand the context of words that carry both benign and potentially harmful meanings. The company is also considering appointing a dedicated overseer for the model's tone and safety protocols. However, the recent departure of three AI safety experts from Meta suggests potential challenges in the development of safer AI technologies.

Llama 3 is expected to be twice as big as its predecessor, Llama 2, with more than 140 billion parameters compared to Llama 2's 70 billion. Despite this increase, Llama 3's parameter count is still significantly smaller than that of the original GPT-4 Mixture-of-Experts model, reported at 1.76 trillion parameters. Meta has yet to decide whether Llama 3 will remain purely a language model or expand into multimodal territory, understanding and generating images and video. In any case, parameter count is no longer seen as the sole or most crucial indicator of an AI model's potential output quality.

Key takeaways:

  • Social media giant Meta is developing Llama 3, an open-source AI model expected to rival OpenAI's GPT-4, with a planned release in July this year.
  • The new model aims to offer enhanced user interaction by providing contextual insights on complex subjects, marking a shift from the more conservative approach of its predecessor, Llama 2.
  • Despite the development efforts, three prominent AI safety experts have left Meta recently, indicating potential challenges in the pursuit of safer AI technologies.
  • Llama 3 is expected to be twice as big as Llama 2, with over 140 billion parameters, but this is still a small fraction of the original GPT-4 Mixture-of-Experts model's reported 1.76 trillion parameters.