The article also highlights the benefits of running AI models locally on devices such as smartphones and laptops. This approach reduces latency, avoids dependence on cloud availability, keeps data private, and could enable new use cases for AI, such as deep integration into a device's operating system. The author suggests that Apple might reveal a similar strategy at its upcoming WWDC conference, focusing on shrinking AI to fit into its customers' pockets rather than building ever-larger cloud AI models, as OpenAI and Google do.
Key takeaways:
- Microsoft has developed a family of smaller AI models, including Phi-3-mini, which can run on devices as small as a smartphone, demonstrating significant advancements in AI efficiency.
- The Phi-3-mini model performs comparably to GPT-3.5, the model behind the first release of ChatGPT, according to Microsoft researchers.
- Microsoft's approach involves being more selective about the training data for AI systems, which has been shown to improve their abilities without increasing their size.
- Running AI models locally on devices like smartphones and laptops could reduce latency, keep data private, and open up new use cases for AI, moving away from the cloud-centric model.