CXL 3.1: What's Next for CXL-based Memory in the Data Center
Today (Nov. 14th, 2023) the CXL™ Consortium announced the continued evolution of the Compute Express Link™ standard with the release of the 3.1 specification. CXL 3.1, backward compatible with all previous generations, improves fabric manageability, further optimizes resource utilization, enables trusted compute environments, extends memory sharing and pooling to avoid stranded memory, and facilitates memory sharing between accelerators. When deployed, these improvements will boost the performance of AI and other demanding compute workloads.
Supercomputing 2023 (SC23), taking place this week in Denver, provided the perfect backdrop for announcing this latest advancement in the CXL standard. At SC23, the Consortium is hosting demos from 16 ecosystem partners, including Rambus, at the CXL pavilion (Booth #1301). There, we're demonstrating the newly announced Rambus CXL Platform Development Kit (PDK) performing memory tiering operations.
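For readers curious what "memory tiering" looks like from the software side, here is a minimal sketch, not taken from the Rambus PDK: on Linux, CXL-attached memory is commonly exposed as a CPU-less NUMA node, so a simple tiering policy can steer hot allocations to local DRAM and colder allocations to the CXL tier. The node numbers below are assumptions for illustration.

```c
/*
 * Minimal sketch (not Rambus PDK code): place a hot buffer in local DRAM
 * and a cold buffer on a CXL-attached NUMA node. Node numbers are
 * assumptions for illustration. Requires libnuma (link with -lnuma).
 */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    size_t sz = 64 << 20;              /* 64 MiB per buffer */
    int dram_node = 0;                 /* assumed local DRAM node */
    int cxl_node  = numa_max_node();   /* assume the highest-numbered node is the CXL tier */

    /* Hot data pinned to local DRAM, cold data placed on the CXL tier. */
    void *hot  = numa_alloc_onnode(sz, dram_node);
    void *cold = numa_alloc_onnode(sz, cxl_node);
    if (!hot || !cold) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }

    memset(hot,  0xAA, sz);   /* touch pages so they are actually faulted in */
    memset(cold, 0x55, sz);

    printf("hot buffer on node %d, cold buffer on node %d\n", dram_node, cxl_node);

    numa_free(hot, sz);
    numa_free(cold, sz);
    return 0;
}
```

In a real deployment, the kernel's tiering support (or a runtime such as the one in the PDK demo) would monitor access patterns and promote or demote pages between tiers automatically; the sketch above only shows the explicit placement primitive.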