AMD has announced its MI300 AI accelerator, while major Nvidia customers, including Microsoft, OpenAI, Amazon, and Meta, are working to reduce their dependence on Nvidia's constrained GPU supply. Intel has unveiled its Core Ultra CPUs with silicon dedicated to AI workloads, and AMD has released new chips aimed at faster AI training. Microsoft's AI-focused Maia 100 chip is expected to arrive this year, Amazon has announced the latest version of its Trainium chip, and Meta is also developing a custom chip for running AI models.
Key takeaways:
- The rise of generative AI has led to a high demand for Nvidia's H100 GPU, prompting tech giants like Microsoft, Meta, OpenAI, Amazon, and Google to start developing their own AI processors.
- Nvidia, AMD, and Intel are racing to release more efficient and powerful AI chips, with AMD reportedly expecting its MI300 AI accelerator to be the fastest revenue ramp of any product in the company's history.
- Intel has unveiled its next generation of CPUs, the Core Ultra, promising better power efficiency and performance, while AMD has released new accelerators and processors designed to run large language models.
- Microsoft, Meta, and other tech companies are building their own custom AI chips to reduce their reliance on Nvidia's limited supply and to power their AI projects.