What if AI doesn’t just keep getting better forever?

Nov 12, 2024 - arstechnica.com
Optimism about the exponential growth of AI capabilities is giving way to concerns that the performance of large language models (LLMs) trained with standard methods may be plateauing. OpenAI insiders have reported that the company's upcoming model, codenamed Orion, shows a smaller performance jump than previous models, with no improvement at all on some tasks. OpenAI co-founder Ilya Sutskever added to these concerns, stating that the era of scaling, in which additional computing resources and training data led to significant improvements, is over, and that the focus is now on finding the next big thing.

A significant part of the problem is a lack of new, high-quality textual data for training new LLMs. Experts suggest that model makers may have already exhausted the most easily accessible data from the public Internet and published books. The challenge now is to find new and effective ways to scale AI models beyond the current plateau.

Key takeaways:

  • AI industry watchers are concerned that the capabilities of large language models (LLMs) may be hitting a plateau, with OpenAI's upcoming model, Orion, showing a smaller performance jump than previous models.
  • Unnamed OpenAI researchers have reported that Orion is not reliably better than its predecessor on certain tasks.
  • OpenAI co-founder Ilya Sutskever has suggested that the era of scaling, where additional computing resources and training data could lead to improvements, may be over, and the focus should now be on finding the 'next thing'.
  • Experts and insiders believe a significant part of the problem is a lack of new, high-quality textual data for training new LLMs, with the easiest-to-access data from the public Internet and published books potentially already exhausted.