
Intel’s “Gaudi 3” AI accelerator chip may give Nvidia’s H100 a run for its money

Apr 12, 2024 - arstechnica.com
Intel has unveiled a new AI accelerator chip, Gaudi 3, at its Vision 2024 event, claiming 50% faster performance when running AI language models compared to Nvidia's H100. The chip is projected to deliver faster training time for OpenAI's GPT-3 175B LLM and Meta's Llama 2, and is being positioned as an alternative to Nvidia's H100, which has been facing supply issues. Intel's Gaudi 3 is also being seen as a potential alternative to custom AI-accelerator chip designs sought by companies like Microsoft, Meta, and OpenAI.

The Gaudi 3 chip builds on the architecture of its predecessor, Gaudi 2, featuring two identical silicon dies connected by a high-bandwidth link, a central cache memory of 48 megabytes, four matrix multiplication engines, and 32 programmable tensor processor cores per die. Intel claims that Gaudi 3 delivers double the AI compute performance of Gaudi 2 and a fourfold boost for computations using the BFloat16 number format. The chip also features 128GB of less expensive HBM2e memory and emphasizes power efficiency, with Intel claiming 40% greater inference power efficiency compared to Nvidia's H100.
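For context on the BFloat16 format the paragraph mentions: bfloat16 keeps float32's 8-bit exponent but truncates the mantissa to 7 bits, which is why accelerators can run it much faster while preserving dynamic range. The following is a minimal Python sketch (not Intel code) of that truncation, using only the standard library:

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Truncate a float32 to bfloat16 by keeping its top 16 bits
    (1 sign bit + 8 exponent bits + 7 mantissa bits)."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return bits >> 16

def bfloat16_bits_to_float32(b: int) -> float:
    """Expand bfloat16 bits back to a float32 value; the discarded
    low mantissa bits come back as zeros."""
    return struct.unpack("<f", struct.pack("<I", b << 16))[0]

# Powers of two round-trip exactly; other values lose mantissa precision.
exact = bfloat16_bits_to_float32(float32_to_bfloat16_bits(1.0))
approx = bfloat16_bits_to_float32(float32_to_bfloat16_bits(3.14159))
```

Here `exact` is 1.0, while `approx` becomes roughly 3.1406: the exponent is preserved, so the relative error stays small even though the mantissa is short.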

Key takeaways:

  • Intel has unveiled a new AI accelerator chip, Gaudi 3, at its Vision 2024 event, claiming 50% faster performance when running AI language models compared to Nvidia's H100 chip.
  • The Gaudi 3 chip is built upon the architecture of its predecessor, Gaudi 2, and features two identical silicon dies connected by a high-bandwidth connection, with a total of 64 cores.
  • Intel is positioning Gaudi 3 as a potentially attractive alternative to the H100, especially given the latter's supply issues and high market share.
  • Intel's Gaudi 3 is being manufactured using TSMC's N5 process technology, which could help Intel compete with Nvidia in terms of semiconductor fabrication technology.
