In response to Nvidia's Blackwell parts, AMD is moving to a yearly release cadence for new Instinct accelerators. The next-gen CDNA 4 compute architecture will keep the same 288GB HBM3e configuration as the MI325X, but move to a 3nm process node for the compute tiles and add support for FP4 and FP6 data types. AMD's future plans also hint at significant architectural upgrades, potentially involving heterogeneous multi-die deployments or photonic memory expansion.
Key takeaways:
- AMD's flagship AI accelerator, the Instinct MI325X, arrives later this year with 288GB of HBM3e memory, more than twice the capacity of Nvidia's H200.
- The MI325X's memory bandwidth rises to 6TB/sec, up from the MI300X's 5.3TB/sec and roughly 1.3x that of the H200.
- AMD is moving to a yearly release cadence for new Instinct accelerators to better compete with Nvidia's Blackwell parts.
- AMD's next-gen CDNA 4 compute architecture will keep the same 288GB HBM3e configuration as the MI325X, but move to a 3nm process node for the compute tiles and add support for FP4 and FP6 data types.
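The payoff of FP4 support is that each value takes only 4 bits, at the cost of a very coarse value grid. As a rough illustration, here is a minimal sketch of rounding to the E2M1 layout (1 sign, 2 exponent, 1 mantissa bit) used for FP4 in the OCP Microscaling spec; the article does not specify CDNA 4's exact encoding, so E2M1 is an assumption here:

```python
# Sketch: quantize a float to the nearest FP4 E2M1 value.
# E2M1 (per the OCP Microscaling spec) is an assumption; CDNA 4's
# actual FP4 encoding is not detailed in the article.

# The 8 non-negative magnitudes representable in E2M1.
E2M1_VALUES = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_fp4(x: float) -> float:
    """Round x to the nearest representable E2M1 value."""
    sign = -1.0 if x < 0 else 1.0
    mag = min(abs(x), 6.0)  # clamp to the format's max magnitude
    nearest = min(E2M1_VALUES, key=lambda v: abs(v - mag))
    return sign * nearest

print(quantize_fp4(0.7))   # rounds to 0.5
print(quantize_fp4(-2.4))  # rounds to -2.0
print(quantize_fp4(10.0))  # clamps to 6.0
```

With only 16 code points, FP4 in practice is paired with per-block scaling factors so that tensor values land inside this narrow range; that trade-off is why FP4 is attractive for inference throughput rather than training precision.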