PLDA and AnalogX Acquisitions Supercharge the Rambus CXL Memory Interconnect Initiative
Big changes are coming to the data center, driven by an exponential rise in data volume and traffic. Disaggregation and composability promise to move us beyond the classic architecture of the server as the unit of computing. By separating the functional components of compute, memory, storage and networking into pools, composed on demand to match the specific requirements of varied workloads, greater performance, higher efficiency and a lower total cost of ownership (TCO) can be achieved.
Compute Express Link™ (CXL™), supported by a broad consortium of hyperscalers, equipment OEMs, chip makers and IP suppliers, has emerged as a new enabling technology for interconnecting computing resources. CXL, now at its 2.0 generation, makes possible high-speed, low-latency links with memory cache coherency between processors, accelerators, NICs, memory and storage. It leverages PCI Express® 5.0 (PCIe 5.0) for its physical layer, harnessing that standard’s tremendous momentum and industry knowledge base.
Today, Rambus announced the launch of our CXL Memory Interconnect Initiative, spearheading research and development of solutions for a new era of data center architecture. Concurrently, we announced the acquisitions of PLDA and AnalogX to supercharge this initiative. PLDA and AnalogX bring products and engineering talent that expand our leading IP portfolio for CXL 2.0 and PCIe 5.0, accelerate our roadmap for next-generation CXL 3.0 and PCIe 6.0 solutions, and provide key building blocks for CXL memory interconnect chips.
Two compelling use models enabled by CXL technology are memory expansion and memory pooling. The former offers the flexible addition of memory capacity to a processor beyond that of its main memory channels. Memory pooling enables a many-to-many connection between hosts (processors) and devices (memory nodes), so the capacity available to a processor can be both greatly expanded and finely tailored to its current workload. When no longer needed, this memory can be released back to the pool. Promising higher performance, greater efficiency and lower TCO, memory pooling moves us toward a fully disaggregated and composable architecture.
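At its core, memory pooling is an allocate-and-release model for capacity shared across hosts. The short Python sketch below is purely conceptual: the class and method names are hypothetical and do not correspond to any CXL specification interface or Rambus product; it simply illustrates capacity being drawn from a shared pool and returned when a workload finishes.

```python
# Conceptual sketch only: a toy model of CXL-style memory pooling semantics.
# The names here are illustrative assumptions, not CXL or Rambus APIs.

class MemoryPool:
    """A shared pool of memory capacity (in GB) that multiple hosts can draw from."""

    def __init__(self, capacity_gb: int):
        self.capacity_gb = capacity_gb
        self.allocations: dict[str, int] = {}  # host_id -> GB currently held

    def free_gb(self) -> int:
        return self.capacity_gb - sum(self.allocations.values())

    def allocate(self, host_id: str, size_gb: int) -> bool:
        """Grant a host extra capacity sized to its current workload, if available."""
        if size_gb > self.free_gb():
            return False
        self.allocations[host_id] = self.allocations.get(host_id, 0) + size_gb
        return True

    def release(self, host_id: str) -> None:
        """Return a host's capacity to the pool once the workload no longer needs it."""
        self.allocations.pop(host_id, None)


pool = MemoryPool(capacity_gb=1024)
pool.allocate("host-A", 256)   # host A expands beyond its local memory channels
pool.allocate("host-B", 512)   # host B draws a larger share for its workload
pool.release("host-A")         # host A's capacity flows back to the pool for reuse
print(pool.free_gb())          # 512
```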
The CXL Memory Interconnect Initiative is the latest chapter in Rambus’ 30+ year history of advancing the leading edge of computing performance. It will leverage our expertise in memory and SerDes subsystems, semiconductor and network security, high-volume memory interface chips, and compute system architectures. We’re excited to welcome aboard the teams from PLDA and AnalogX to join us in this endeavor to shape the future of the data center.