Consortium Releases CXL 3.0 CPU Interconnect Based on PCIe 6.0

The CXL Consortium released version 3.0 of its Compute Express Link interconnect specification on Tuesday. CXL makes it possible to connect CPUs to accelerators, memory and other devices.

Among other changes, CXL 3.0 adds support for the PCIe 6.0 standard, which offers a maximum transfer rate of 64GT/s, amounting to 128GB/s of bandwidth for an x16 interface. That is double the bandwidth of previous CXL versions, which were all based on PCIe 5.0; PCIe achieved the doubling by transitioning to PAM4 signaling. Like PCIe 6.0 itself, CXL 3.0 also adopts a new flit format with a larger packet size of 256 bytes. According to the consortium, latency has not increased compared to CXL 2.0 as a result.
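The 128GB/s figure follows directly from the quoted per-lane rate. A minimal sketch of that arithmetic, assuming the article's figures (64GT/s per lane, an x16 link, and one bit per transfer per lane):

```python
# Back-of-the-envelope bandwidth for a PCIe 6.0 / CXL 3.0 x16 link.
# Figures taken from the article; overhead from flit framing and
# forward error correction is ignored here.
transfer_rate_gt = 64   # GT/s per lane (PCIe 6.0)
lanes = 16              # x16 interface

raw_gbit_per_s = transfer_rate_gt * lanes   # 1024 Gbit/s
raw_gbyte_per_s = raw_gbit_per_s / 8        # divide by 8 bits per byte

print(raw_gbyte_per_s)  # 128.0 GB/s per direction
```

Note that this is the raw link rate in one direction; effective throughput is slightly lower once flit and error-correction overhead are accounted for.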

Apart from the higher bandwidth, CXL 3.0 also improves peer-to-peer connectivity between devices: devices can now access each other's memory directly, without the intervention of a host. Improved memory pooling makes it possible for several hosts to access the same memory pool simultaneously. It is also now possible to cascade multiple switches; previously, only a single switch could sit between a host and its devices.

CXL is an open standard based on PCIe and functions as an interconnect between CPUs, GPUs, FPGAs, network devices, memory such as DDR4 and DDR5, and storage. In addition, the standard enables cache coherency between those devices. CXL is primarily intended for data centers and is also suitable for connecting multiple servers; CXL 3.0 supports fabrics of up to 4096 nodes.
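The 4096-node limit is not arbitrary: CXL 3.0 introduces port-based routing for fabrics, and the node count corresponds to a 12-bit routing identifier. A small sketch, assuming that 12-bit ID width:

```python
# CXL 3.0 fabric scale: with 12-bit port-based routing IDs
# (an assumption consistent with the 4096-node figure),
# the addressable node count is 2^12.
PBR_ID_BITS = 12
max_nodes = 2 ** PBR_ID_BITS
print(max_nodes)  # 4096
```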

The CXL Consortium is supported by major chip companies such as AMD, Arm, Intel, Micron, Nvidia and Samsung. For example, AMD’s upcoming Epyc server chips and Intel’s Sapphire Rapids processors will both receive support for the CXL 1.1 specification. Tech companies such as Google, Meta and Microsoft also support the standard.