What's the Difference Between CXL 1.1 and CXL 2.0?
By Elad Shliselberg, Ronen Hyatt (UnifabriX Ltd.)
ElectronicDesign (August 24, 2022)
Compute Express Link (CXL) is a cache-coherent interconnect designed to be an open industry-standard interface between platform functions such as processors, accelerators, and memory.
CXL 1.1 is the first productized version of CXL. It opens a world of possibilities for building on the many strong features in the PCI Express (PCIe) arsenal. The specification introduces the concepts of memory expansion, coherent co-processing via accelerator caches, and device-host memory sharing.
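To make memory expansion more tangible, here is a minimal hedged sketch in C. It assumes a Linux host where a CXL Type 3 memory expander has been configured to surface as a device-DAX node; the path /dev/dax0.0 and the 2-MiB mapping size are illustrative assumptions, not anything fixed by the CXL specification. Once mapped, the expanded capacity is accessed with ordinary loads and stores.

/* Hedged sketch: map a hypothetical device-DAX node backed by a CXL
 * Type 3 memory expander and touch the memory directly. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical device node; the actual name depends on how the
     * system exposes the expander. */
    int fd = open("/dev/dax0.0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    size_t len = 2UL << 20;   /* 2 MiB, a typical device-DAX alignment */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    memset(p, 0xA5, len);     /* plain stores land in CXL-attached memory */

    munmap(p, len);
    close(fd);
    return 0;
}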
The rich set of CXL semantics goes well beyond the familiar cxl.io (PCIe with enhancements) to also offer cxl.cache and cxl.mem. These semantics are grouped into Device Types: Type 1 (cxl.io/cxl.cache), Type 2 (cxl.io/cxl.cache/cxl.mem), and Type 3 (cxl.io/cxl.mem).
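As a quick illustration of that grouping (not a specification-defined API; all identifiers here are hypothetical), a small C sketch can encode which protocols each Device Type carries:

/* Illustrative only: the Device Type to protocol mapping described above,
 * expressed as a bitmask table. Names are hypothetical. */
#include <stddef.h>
#include <stdio.h>

enum cxl_proto {
    CXL_IO    = 1 << 0,  /* cxl.io: PCIe with enhancements, present in all types */
    CXL_CACHE = 1 << 1,  /* cxl.cache: device coherently caches host memory */
    CXL_MEM   = 1 << 2,  /* cxl.mem: host accesses device-attached memory */
};

struct cxl_device_type {
    const char *name;
    unsigned protocols;  /* bitmask of enum cxl_proto */
};

static const struct cxl_device_type cxl_types[] = {
    { "Type 1 (e.g., coherent accelerator)", CXL_IO | CXL_CACHE },
    { "Type 2 (e.g., accelerator w/ memory)", CXL_IO | CXL_CACHE | CXL_MEM },
    { "Type 3 (e.g., memory expander)",       CXL_IO | CXL_MEM },
};

int main(void)
{
    for (size_t i = 0; i < sizeof(cxl_types) / sizeof(cxl_types[0]); i++) {
        const struct cxl_device_type *t = &cxl_types[i];
        printf("%-40s io=%d cache=%d mem=%d\n", t->name,
               !!(t->protocols & CXL_IO),
               !!(t->protocols & CXL_CACHE),
               !!(t->protocols & CXL_MEM));
    }
    return 0;
}

Note that Type 3 devices carry no cxl.cache, which is why memory expanders can stay simple: the host owns coherency for the device-attached memory.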
Given the disruptive nature of CXL, its true value and the ecosystem of applications it enables will only be realized once it's deployed at scale. As the standard evolves, CXL 2.0 builds upon CXL 1.1 and opens new opportunities to further strengthen the robustness and scalability of the technology, while remaining fully backward-compatible with CXL 1.1.
In this article, we’ll explore the fundamental capabilities of CXL and highlight the primary differences between CXL 2.0 and CXL 1.1, as well as the enhancements made as the protocol evolves.