Nvidia's top submission for the inference benchmark was built around eight of its flagship H100 chips. Despite dominating the market for training AI models, Nvidia has yet to capture the inference market. Intel's success was based on its Gaudi2 chips, produced by the Habana unit it acquired in 2019; the Gaudi2 system was roughly 10% slower than Nvidia's. Both companies declined to discuss the cost of their chips.
Key takeaways:
- An artificial intelligence benchmark group, MLCommons, has unveiled the results of new tests measuring how quickly top-of-the-line hardware can run AI models.
- Nvidia Corp's chip was the top performer in tests on a large language model, with a semiconductor produced by Intel Corp a close second.
- The new MLPerf benchmark is based on a large language model with 6 billion parameters that summarizes CNN news articles.
- Intel's success is based on its Gaudi2 chips, produced by the Habana unit the company acquired in 2019; the Gaudi2 system was roughly 10% slower than Nvidia's.