
AI leaders are starting to rethink the best way to advance AI

Nov 24, 2024 - businessinsider.com
AI leaders are reconsidering the data-heavy approach traditionally used to train large language models, as linear scaling may be reaching its limits. Tech companies such as OpenAI, Meta, and Google have long focused on amassing vast amounts of training data, assuming that more material would yield smarter models. Industry leaders are now exploring alternatives, with smaller, more efficient models and new training methods gaining traction.

While some executives advocate for this shift, others, such as Microsoft's CTO, believe AI has not yet hit a scaling wall. OpenAI's o1, a model that spends more time on inference before answering a question, is one attempt to get more out of existing large language models. That extra inference, however, demands more computational power, making the model slower and more expensive to run.
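
OpenAI has not disclosed how o1 allocates its extra inference-time compute, so the sketch below illustrates the general trade-off using a well-known technique from the same family: self-consistency sampling, where the model is queried several times and the majority answer wins. Here query_model is a hypothetical stand-in for a call to any LLM API, not o1's actual mechanism.

    from collections import Counter

    def query_model(prompt: str, temperature: float = 0.8) -> str:
        """Hypothetical stand-in for a single LLM API call.

        Returns the model's final answer string for the prompt.
        """
        raise NotImplementedError("wire this up to a real model")

    def answer_with_extra_compute(prompt: str, n_samples: int = 16) -> str:
        """Spend more compute at inference time: sample several answers
        at a nonzero temperature and return the most common one.

        Every extra sample is another full model call, which is exactly
        why this style of inference is slower and more expensive.
        """
        answers = [query_model(prompt) for _ in range(n_samples)]
        return Counter(answers).most_common(1)[0][0]

Each added sample multiplies the cost roughly linearly, which mirrors the article's point: any quality gained from extra "thinking" is paid for in latency and compute.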

Key takeaways:

  • AI leaders are reconsidering the traditional approach of using large amounts of data to train large language models, as this method may have limitations.
  • Smaller, more efficient models and new training methods are gaining support in the industry, with some advocating for models that translate questions into computer code to generate answers (a sketch of this idea follows the list).
  • Despite concerns, some industry leaders, like Microsoft's CTO, believe that AI has not yet hit a scaling wall and that there are still benefits to be gained from scaling up.
  • OpenAI's new model, o1, is designed to better handle quantitative questions and spends more time on inference before answering a question, but it requires more computational power, making it slower and more expensive.
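
The code-generation approach mentioned above resembles published program-aided methods (such as PAL, program-aided language models), in which the model emits a short program and a runtime, not the model, computes the final answer. The snippet below is a hand-written illustration of that idea; the generated_code string is a mock of what a model might produce, not real model output.

    import textwrap

    # Mock of what a model might emit for: "I had 23 apples, ate 20,
    # then bought 6 more. How many do I have now?"
    generated_code = """
    apples = 23
    eaten = 20
    bought = 6
    result = apples - eaten + bought
    """

    def run_generated_program(code: str) -> object:
        """Execute model-generated code and read back its `result` variable.

        A production system would sandbox this step; exec() on untrusted
        model output is shown here purely to illustrate the idea.
        """
        namespace: dict = {}
        exec(textwrap.dedent(code), namespace)  # the runtime does the arithmetic
        return namespace["result"]

    print(run_generated_program(generated_code))  # -> 9

Offloading the arithmetic to a deterministic runtime is what makes this attractive for quantitative questions: the model only has to set up the computation correctly, not carry it out.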
