The stagnation in language model progress is not limited to OpenAI: Google's Gemini 2.0 is also falling short of its targets, and open-source models are catching up to billion-dollar proprietary ones. Despite this, OpenAI CEO Sam Altman remains optimistic, suggesting that the path to artificial general intelligence (AGI) lies in more creative use of existing models. The industry now faces the question of whether building ever-more-powerful AI models, and the massive data centers they require, is economically and environmentally viable.
Key takeaways:
- OpenAI's upcoming Orion model shows only minor improvements over its predecessor, GPT-4, indicating a slowdown in the development of language models.
- A shortage of high-quality training data is one reason for the slowdown; it has led OpenAI to train on synthetic data generated by AI models (a minimal sketch of this approach follows the list).
- The stagnation in language model development is an industry-wide issue, with Google's Gemini 2.0 and Anthropic's Opus also falling short of targets.
- Despite the slowdown, Sam Altman remains optimistic, suggesting that the path to AGI lies in creative use of existing models and a shift in focus from training to inference.
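To make the synthetic-data point concrete, here is a minimal sketch of the general technique: an existing model drafts completions for seed prompts, a filter discards weak samples, and the surviving pairs become training data. The model choice (gpt2), the seed prompts, and the length-based filter are all illustrative assumptions, not OpenAI's actual pipeline.

```python
# Minimal sketch of synthetic-data generation for training.
# gpt2, the seed prompts, and the quality filter are assumptions
# for illustration; they do not reflect OpenAI's actual pipeline.
import json
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

seed_prompts = [
    "Explain why the sky is blue.",
    "Summarize the causes of the 2008 financial crisis.",
]

records = []
for prompt in seed_prompts:
    # Sample several candidate continuations per prompt.
    outputs = generator(
        prompt,
        max_new_tokens=80,
        num_return_sequences=3,
        do_sample=True,
        temperature=0.8,
    )
    for out in outputs:
        # The pipeline returns prompt + continuation; keep only the continuation.
        completion = out["generated_text"][len(prompt):].strip()
        # Crude quality filter: production pipelines use reward models or
        # far more elaborate heuristics than a simple length check.
        if len(completion.split()) > 20:
            records.append({"prompt": prompt, "completion": completion})

# Write the synthetic pairs in the JSONL layout commonly used for fine-tuning.
with open("synthetic_train.jsonl", "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")
```

The filtering step is where such pipelines live or die: without it, the student model simply inherits the generator's errors, which is one reason synthetic data alone has not reversed the slowdown.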