What's the Difference Between CXL 1.1 and CXL 2.0?
By Elad Shliselberg, Ronen Hyatt (UnifabriX Ltd.)
ElectronicDesign (August 24, 2022)
Compute Express Link (CXL) is a cache-coherent interconnect designed as an open industry-standard interface between platform components such as processors, accelerators, and memory.
CXL 1.1 is the first productized version of CXL. It builds on the strong foundation of the PCI Express (PCIe) arsenal and opens up a wide range of new possibilities. The specification introduces the concepts of memory expansion, coherent co-processing via accelerator caches, and device-host memory sharing.
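To make memory expansion concrete, here is a minimal sketch of how host software might allocate from an expander on Linux, where a CXL Type 3 memory device typically surfaces as a CPU-less NUMA node. The node id (2) is a hypothetical example and varies per platform, and the use of libnuma is an assumption for illustration, not something mandated by the CXL specification.

```c
/* Minimal sketch: treating CXL-attached expansion memory as a NUMA node.
 * The node id (2) is a hypothetical example; it varies per platform.
 * Build with: gcc cxl_alloc.c -lnuma */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "libnuma not supported on this system\n");
        return 1;
    }

    int cxl_node = 2;        /* assumed id of the CXL memory node */
    size_t len = 1UL << 20;  /* 1 MiB */

    /* Bind the allocation to the expander's node instead of local DRAM. */
    void *buf = numa_alloc_onnode(len, cxl_node);
    if (!buf) {
        perror("numa_alloc_onnode");
        return 1;
    }

    memset(buf, 0, len);     /* touch the pages so they are faulted in */
    printf("1 MiB allocated on NUMA node %d\n", cxl_node);

    numa_free(buf, len);
    return 0;
}
```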
The rich set of CXL semantics goes well beyond the familiar cxl.io (PCIe with enhancements) to also offer cxl.cache and cxl.mem. These semantics are grouped into Device Types: Type 1 (cxl.io + cxl.cache), Type 2 (cxl.io + cxl.cache + cxl.mem), and Type 3 (cxl.io + cxl.mem).
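As an illustration (not a real CXL API), the sketch below models the three protocols as bit flags and prints which mix each Device Type negotiates; the example device classes in the table are typical but assumed here.

```c
/* Illustrative sketch only (not a real CXL API): the three CXL protocols
 * as bit flags and the protocol mix each Device Type negotiates. */
#include <stdio.h>

enum cxl_protocol {
    CXL_IO    = 1 << 0,   /* cxl.io: PCIe-based I/O, always present */
    CXL_CACHE = 1 << 1,   /* cxl.cache: device caches host memory   */
    CXL_MEM   = 1 << 2,   /* cxl.mem: host accesses device memory   */
};

static const struct {
    const char *name;
    unsigned protocols;
} device_types[] = {
    { "Type 1 (e.g. SmartNIC)",        CXL_IO | CXL_CACHE           },
    { "Type 2 (e.g. accelerator)",     CXL_IO | CXL_CACHE | CXL_MEM },
    { "Type 3 (e.g. memory expander)", CXL_IO | CXL_MEM             },
};

int main(void)
{
    for (size_t i = 0; i < sizeof device_types / sizeof *device_types; i++) {
        printf("%-32s io:%d cache:%d mem:%d\n",
               device_types[i].name,
               !!(device_types[i].protocols & CXL_IO),
               !!(device_types[i].protocols & CXL_CACHE),
               !!(device_types[i].protocols & CXL_MEM));
    }
    return 0;
}
```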
Given the disruptive nature of CXL, its true value and the ecosystem of applications it enables will only be fully realized once it's deployed at scale. As the standard evolves, CXL 2.0 builds upon CXL 1.1 while remaining fully backward compatible, adding capabilities such as single-level switching, memory pooling, persistent-memory support, and hot-plug that further strengthen the robustness and scalability of the technology.
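As a rough illustration of the pooling idea (purely a toy model, not any real fabric-manager API), the sketch below shows a pool device's capacity slices being assigned to hosts and later reclaimed for reuse:

```c
/* Toy model only (no real API): CXL 2.0 memory pooling, where a pooled
 * device carves its capacity into slices that a fabric manager assigns
 * to different hosts and can later reclaim. */
#include <stdio.h>

#define POOL_SLICES 16          /* hypothetical number of capacity slices */
#define UNASSIGNED  (-1)

static int slice_owner[POOL_SLICES];   /* host id owning each slice */

/* Assign the first free slice to a host; returns slice index or -1. */
static int pool_assign(int host_id)
{
    for (int i = 0; i < POOL_SLICES; i++) {
        if (slice_owner[i] == UNASSIGNED) {
            slice_owner[i] = host_id;
            return i;
        }
    }
    return -1;                  /* pool exhausted */
}

/* Release a slice back to the pool so another host can claim it. */
static void pool_release(int slice)
{
    slice_owner[slice] = UNASSIGNED;
}

int main(void)
{
    for (int i = 0; i < POOL_SLICES; i++)
        slice_owner[i] = UNASSIGNED;

    int a = pool_assign(0);     /* host 0 grows its memory */
    int b = pool_assign(1);     /* host 1 grows its memory */
    printf("host 0 -> slice %d, host 1 -> slice %d\n", a, b);

    pool_release(a);            /* host 0 shrinks; capacity is reusable */
    printf("slice %d returned to pool\n", a);
    return 0;
}
```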
In this article, we'll explore the fundamental capabilities of CXL and highlight the primary differences between CXL 2.0 and CXL 1.1, as well as the enhancements made as the protocol evolves.