OpenAI's new "Orion" model reportedly shows small gains over GPT-4

Nov 11, 2024 - the-decoder.com
OpenAI's upcoming language model, Orion, is reportedly underperforming, showing only minor improvements over its predecessor, GPT-4. The slowdown in language model development is affecting the entire AI industry, with insufficient high-quality training data cited as a key reason. In response, OpenAI is shifting its focus toward learning more from less data and training on synthetic data generated by AI models. However, this approach risks producing a new model that merely resembles the older ones it was trained on.

The stagnation in language model progress is not limited to OpenAI: Google's Gemini 2.0 is also reportedly falling short of targets, and open-source models are catching up to billion-dollar proprietary ones. Despite this, OpenAI CEO Sam Altman remains optimistic, suggesting that the path to artificial general intelligence (AGI) lies in creative use of existing models. The industry now faces the question of whether building ever-more-powerful AI models, and the massive data centers they require, is economically and environmentally viable.

Key takeaways:

  • OpenAI's upcoming Orion model shows only minor improvements over its predecessor, GPT-4, indicating a slowdown in the development of language models.
  • Insufficient high-quality training data is one of the reasons for this slowdown, leading OpenAI to use synthetic data generated by AI models for training.
  • The stagnation in language model development is an industry-wide issue, with Google's Gemini 2.0 and Anthropic's Opus also falling short of targets.
  • Despite the slowdown, OpenAI CEO Sam Altman remains optimistic, suggesting that the path to artificial general intelligence (AGI) lies in creative use of existing models and a shift in focus from training to inference.