The article further criticizes the claim that CXL sales could reach $15 billion by 2028 as "outright ridiculous". It argues that proprietary protocols such as Nvidia NVLink and Google ICI, alongside Ethernet and InfiniBand, will be the main interconnects for AI clusters because they offer higher bandwidth and faster signaling. The article concludes that AMD needs to move off PCIe-style SerDes in its AI accelerators to compete with Nvidia's B100, and that CXL is not the right protocol for AI.
Key takeaways:
- The CXL (Compute Express Link) technology, once hailed as a game-changer for datacenter hardware, has seen a decline in interest from hyperscalers and large semiconductor companies.
- Despite the decline, there is still ongoing research and discussion around CXL, with some professionals pushing it as an 'enabler' for AI.
- However, the article argues that CXL is not suitable for AI because PCIe-style SerDes run at lower lane rates than Ethernet-style SerDes, while both compete for the same limited "beachfront" (shoreline) area along the die edge.
- The article suggests that proprietary protocols such as Nvidia NVLink and Google ICI, along with Ethernet and InfiniBand, will be the main scale-up and scale-out interconnects for AI clusters due to their superior bandwidth and speed.
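The beachfront argument above comes down to simple arithmetic: every SerDes lane consumes a slice of the die's edge, so at a fixed shoreline budget the lane rate determines total off-chip bandwidth. The sketch below illustrates this. The lane rates are public spec values (PCIe 5.0 at 32 GT/s, PCIe 6.0 at 64 GT/s, 112G Ethernet-class SerDes), but the per-lane edge width is a placeholder assumption for illustration, not a vendor figure.

```python
# Back-of-envelope comparison of edge ("beachfront") bandwidth density.
# Lane rates are public spec values; the edge width per lane (0.25 mm)
# is an assumed placeholder used equally for all entries, so only the
# relative comparison is meaningful.

SERDES = {
    # name: lane rate in Gb/s
    "PCIe 5.0 (32 GT/s)": 32,
    "PCIe 6.0 (64 GT/s)": 64,
    "112G Ethernet-class": 112,
}

EDGE_WIDTH_MM = 0.25  # assumed shoreline consumed per lane (placeholder)

def edge_density_gbps_per_mm(rate_gbps: float, width_mm: float) -> float:
    """Bandwidth delivered per millimetre of die edge."""
    return rate_gbps / width_mm

for name, rate in SERDES.items():
    density = edge_density_gbps_per_mm(rate, EDGE_WIDTH_MM)
    print(f"{name}: {density:.0f} Gb/s per mm of edge")
```

Under these assumptions, a 112G Ethernet-class lane delivers 3.5x the bandwidth of a PCIe 5.0 lane from the same shoreline, which is the core of the article's case against PCIe-based protocols like CXL for AI accelerators.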