To cater to diverse applications, the offerings will range from single- to multi-GPU systems, x86- to Grace-based processors, and air- to liquid-cooling technology. Nvidia's MGX modular reference design platform now supports Blackwell products, including the new Nvidia GB200 NVL2 platform, which is designed for large language model inference, retrieval-augmented generation, and data processing. The Blackwell platform also includes Nvidia Blackwell Tensor Core GPUs, GB200 Grace Blackwell Superchips, and the GB200 NVL72. Nvidia's partner ecosystem includes TSMC, the world's leading semiconductor manufacturer, along with global electronics makers that supply key components for building AI factories.
Key takeaways:
- Nvidia CEO Jensen Huang unveiled Nvidia Blackwell architecture-powered systems featuring Grace CPUs and Nvidia networking and infrastructure for AI factories and data centers.
- Nvidia Blackwell GPUs promise up to 25x lower energy consumption and cost for AI processing tasks, with the Nvidia GB200 Grace Blackwell Superchip delivering up to a 30x performance increase for LLM inference workloads.
- Nvidia's MGX modular reference design platform now supports Blackwell products, which can cut development costs by up to three-quarters and shorten development time by two-thirds, to just six months.
- Taiwan's leading organizations, including Chang Gung Memorial Hospital and Foxconn, are rapidly adopting Blackwell to advance biomedical research, accelerate imaging and language applications, and develop smart platforms for AI-powered electric vehicles and robotics.