
CXL Is Dead In The AI Era

Mar 16, 2024 - semianalysis.com
The article discusses the promise and the challenges of Compute Express Link (CXL) technology in the datacenter hardware world. Despite the initial hype around heterogeneous compute, memory pooling, and composable server architectures, many projects have been shelved and large companies have pivoted away from CXL. The article argues that the push for CXL as an 'enabler' of AI is misguided because of the limitations of PCIe-style SerDes and the constraints of chip beachfront (shoreline) area. It also highlights CXL's limited availability and its lack of support on existing GPUs.

The article further dismisses forecasts of CXL sales reaching up to $15 billion by 2028 as "outright ridiculous". It argues that proprietary protocols such as Nvidia NVLink and Google ICI, alongside Ethernet and InfiniBand, will be the main interconnects for AI clusters because of their superior bandwidth and speed. The article concludes that AMD needs to move off PCIe-style SerDes on its AI accelerators to compete with Nvidia's B100, and that CXL is not the right protocol for AI.

Key takeaways:

  • The CXL (Compute Express Link) technology, once hailed as a game-changer for datacenter hardware, has seen a decline in interest from hyperscalers and large semiconductor companies.
  • Despite the decline, there is still ongoing research and discussion around CXL, with some professionals pushing it as an 'enabler' for AI.
  • However, the article argues that CXL is not suitable for AI due to issues related to PCIe SerDes and beachfront or shoreline area.
  • The article suggests that proprietary protocols such as Nvidia NVLink and Google ICI, or Ethernet and InfiniBand, will be the main scale-up and scale-out interconnects for AI clusters due to their superior bandwidth and speed.
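The bandwidth gap behind that last point can be sketched with public headline numbers. This is a rough estimate, not from the article itself: it assumes CXL rides on a PCIe Gen5 x16 link and compares against NVLink 4 as shipped on the H100, ignoring protocol overhead on both sides.

```python
# Back-of-the-envelope: per-direction bandwidth of PCIe Gen5 x16
# (the physical link CXL currently uses) vs. Nvidia NVLink 4 (H100).
# Headline figures only; real throughput is lower after protocol overhead.

PCIE_GEN5_GTS = 32    # GT/s per lane for PCIe 5.0
PCIE_LANES = 16
# 128b/130b encoding overhead (~1.5%) is ignored for this rough estimate.
pcie_gbs = PCIE_GEN5_GTS * PCIE_LANES / 8    # GB/s per direction

NVLINK4_TOTAL_GBS = 900    # aggregate bidirectional GB/s per H100 GPU
nvlink_gbs = NVLINK4_TOTAL_GBS / 2           # GB/s per direction

print(f"PCIe Gen5 x16: {pcie_gbs:.0f} GB/s per direction")
print(f"NVLink 4:      {nvlink_gbs:.0f} GB/s per direction")
print(f"NVLink advantage: ~{nvlink_gbs / pcie_gbs:.1f}x")
```

Even before counting protocol overhead, a proprietary scale-up fabric delivers roughly seven times the per-direction bandwidth of the PCIe link CXL depends on, which is the core of the article's argument that CXL cannot be the interconnect for AI clusters.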
