
New AI Reasoning Model Rivaling OpenAI Trained on Less Than $50 in Compute

Feb 06, 2025 - gizmodo.com
The article discusses the commodification of AI language models, highlighting the emergence of open-source models like DeepSeek and S1. S1, developed by researchers at Stanford and the University of Washington, was trained with minimal resources, using less than $50 in cloud compute credits. It competes with OpenAI's o1 by imitating the reasoning process of Google's Gemini 2.0 Flash Thinking Experimental model. The researchers further improved S1's accuracy by instructing it to "wait," which extends its reasoning time before it commits to an answer. The result underscores how simple techniques can yield significant gains in AI, despite concerns about the limitations of current models.
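The article describes the "wait" trick only at a high level. A minimal, illustrative sketch of the general idea might look like the following: whenever the model would end its reasoning, append "Wait" and let it keep thinking before producing a final answer. The `generate` stub, the `<think>` markers, and the `reason_with_wait` helper are assumptions for illustration, not details taken from the article or the S1 code.

```python
# Illustrative sketch of the "wait" trick: when the model tries to end its
# reasoning, append "Wait" and let it keep thinking before answering.

def generate(prompt: str, stop: str | None = None) -> str:
    """Placeholder for a real LLM completion call (no specific model or API
    is assumed); returns generated text, stopping at `stop` if given."""
    return " ...reasoning continues... "  # dummy output so the sketch runs

def reason_with_wait(question: str, extra_rounds: int = 2) -> str:
    """Force extra reasoning passes by appending 'Wait' each time the model
    would otherwise close its thinking block."""
    END_THINK = "</think>"  # assumed end-of-reasoning marker
    trace = f"Question: {question}\n<think>\n"
    trace += generate(trace, stop=END_THINK)          # first reasoning pass
    for _ in range(extra_rounds):
        trace += "\nWait,"                            # nudge the model to keep going
        trace += generate(trace, stop=END_THINK)      # extended reasoning pass
    trace += f"\n{END_THINK}\n"
    answer = generate(trace)                          # final answer after thinking
    return trace + answer

if __name__ == "__main__":
    print(reason_with_wait("What is 15% of 240?"))
```

In the actual S1 work this is reportedly applied at decoding time, by suppressing the end-of-thinking token and inserting "Wait," but the prompt-level loop above captures the same effect: more deliberation before the final answer.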

The article also addresses the ethical and legal challenges surrounding AI training data, noting OpenAI's criticism of DeepSeek's practices and ongoing litigation over data usage. While S1's performance is impressive, it relies heavily on existing models like Gemini, akin to a compressed version of a larger model. The rise of open-source models raises questions about the future of companies like OpenAI, but defenders argue that success will come from building applications on top of these models. The article concludes by noting that the demand for computing resources will likely increase as AI becomes more integrated into daily life, suggesting that investments in infrastructure, like OpenAI's server farm, will remain valuable.

Key takeaways:

  • AI language models are becoming commodified, with open-source offerings like DeepSeek and new entrants like S1 demonstrating that they can be developed on a relatively small budget.
  • S1, a reasoning model trained with less than $50 in cloud compute credits, competes with OpenAI's o1 by mimicking the thinking process of Google's Gemini 2.0 Flash Thinking Experimental model.
  • The rise of cheap, open-source models raises questions about the future of companies like OpenAI, but defenders argue that success will come from building useful applications on top of these models.
  • Despite the commodification of AI models, inference remains expensive, and the demand for computing resources is expected to increase as AI becomes more integrated into everyday life.
