The MI300X is powered by the ROCm 6.0 software stack, which supports a wide range of AI workloads and the latest compute formats, such as FP8. The accelerator is built on the CDNA 3 architecture and combines 5nm and 6nm chiplets for a total of 153 billion transistors. It also delivers a significant memory upgrade, offering 50% more HBM capacity than its predecessor, the MI250X (192 GB of HBM3 versus 128 GB of HBM2e). Despite all that horsepower, AMD's MI300X is rated at 750W, only 50W more than the NVIDIA H200.
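As illustrative context rather than a detail from AMD's announcement: the ROCm stack plugs into mainstream frameworks such as PyTorch, whose ROCm builds expose AMD GPUs through the familiar torch.cuda API. Below is a minimal sketch, assuming a ROCm build of PyTorch and a visible MI300X-class device; it is not AMD's own example code.

```python
# Minimal sketch (illustrative, not from AMD's materials): on a ROCm build of
# PyTorch, AMD accelerators are addressed through the existing torch.cuda API,
# so CUDA-targeted code paths generally run unchanged on MI300X-class hardware.
import torch

if torch.cuda.is_available():  # True on ROCm builds when an AMD GPU is visible
    device = torch.device("cuda")  # maps to the ROCm/HIP backend on AMD hardware
    print(f"Found {torch.cuda.device_count()} accelerator(s): "
          f"{torch.cuda.get_device_name(0)}")

    # bfloat16 is one of the low-precision formats commonly used for AI workloads
    x = torch.randn(4096, 4096, device=device, dtype=torch.bfloat16)
    y = x @ x  # matrix multiply executed on the accelerator
    print(y.dtype, y.shape)
else:
    print("No ROCm/CUDA device visible; running on CPU.")
```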
Key takeaways:
- AMD has launched its flagship AI GPU accelerator, the MI300X, which the company claims delivers up to 60% better performance than NVIDIA's H100.
- The MI300X is built with advanced packaging technologies from TSMC and offers higher memory capacity, greater memory bandwidth, and stronger compute performance than NVIDIA's H100 in AMD's published comparisons.
- The MI300X is backed by the ROCm 6.0 software stack, which supports a wide range of AI workloads and delivers significant speedups across key operations.
- Despite competition from NVIDIA and Intel, AMD's MI300X is poised to be a leader in the AI segment, with support from companies such as Oracle, Dell, Meta, and OpenAI.