The author argues that the value in AI will come from building platforms that treat models like processors, delivering ongoing performance improvements to developers. He is skeptical that artificial general intelligence (AGI) will be achieved through vertical integration, as Google is attempting, and believes the biggest benefits will come from horizontal reach at the API, model, and GPU layers. He acknowledges, however, that Google could prove him wrong.
Key takeaways:
- The article examines the different approaches tech giants such as Google, AWS, Microsoft, Nvidia, and Meta are taking to AI: Google is pursuing a fully integrated model, while AWS and Microsoft lean towards a more modular approach.
- Google's integrated approach, reminiscent of Apple's business model, is unique among the major AI players and could offer a competitive advantage. Its success, however, depends on whether Google can sustain its product excellence and whether the benefits of integration prove significant in the internet services market.
- Microsoft's strategy is a mix of integration and modularization, largely due to its partnership with OpenAI. However, the company's dependence on a partner it doesn't control poses significant risks.
- The rise of large language models (LLMs) could disrupt Nvidia's dominance in the GPU market, but Nvidia's continuous innovation and performance advantage make it difficult for competitors to catch up.
- Meta's open-source approach with its Llama models could let Meta accrue benefits from widespread usage and ecosystem adoption.