Rambus Unveils HBM4E Controller: 16 GT/s, 2,048-Bit Interface, Enabling C-HBM4E

By Anton Shilov, EE Times | April 4, 2026

Rambus has introduced one of the industry's first memory controller IPs supporting HBM4E memory, designed to handle data transfer rates of up to 16 GT/s and deliver 4 TB/s of bandwidth per HBM4E memory stack.

The controller IP supports various proprietary reliability, availability, and serviceability (RAS) features, along with telemetry capabilities designed to improve the reliability and efficiency of memory subsystems, the company said. It can be integrated into ASICs expected to emerge in 2027–2028, as well as custom HBM4E (C-HBM4E) base dies currently in development.

Rambus HBM4E memory controller

Rambus said its HBM4E memory controller can be integrated into a conventional ASIC and combined with a third-party HBM4 physical layer (PHY) implementation to build a complete memory subsystem, with HBM4 stacks communicating with the ASIC over an interposer. Alternatively, the controller can be integrated into emerging custom C-HBM4E base dies and interface with HBM4E memory devices directly, freeing up shoreline (die-edge I/O area) inside ASICs and lowering power consumption. This flexibility enables Rambus to address accelerators with a variety of memory-subsystem implementations.

The main selling point of the controller is its support for data transfer rates of up to 16 GT/s per pin, enabling roughly 4 TB/s of memory bandwidth per HBM4E stack with a 2,048-bit interface. In large AI processors that integrate eight HBM stacks, such as Nvidia's dual-chiplet R200, this translates to a peak of 32 TB/s of aggregate memory bandwidth, dramatically higher than the 8 TB/s of aggregate bandwidth featured by Nvidia's dual-chiplet B200 and B300 GPUs. As for maximum capacity, Rambus claimed its HBM4E controller is compliant with JEDEC's HBM4E specification and can support up to 64 GB of memory per stack, as defined by the standard.
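The bandwidth figures above follow directly from the data rate and interface width. A minimal back-of-the-envelope sketch (assuming the customary decimal units, where 1 TB/s = 1,000 GB/s):

```python
# Back-of-the-envelope check of the quoted HBM4E bandwidth figures.
# Assumptions: 16 GT/s per pin, 2,048-bit interface per stack,
# decimal units (1 TB/s = 1000 GB/s), and eight stacks per accelerator.

DATA_RATE_GTS = 16       # transfers per second per pin, in giga-transfers
BUS_WIDTH_BITS = 2048    # HBM4E interface width per stack

# Per-stack bandwidth: transfers/s * bits per transfer / 8 bits per byte
per_stack_gbs = DATA_RATE_GTS * BUS_WIDTH_BITS / 8
print(per_stack_gbs)     # 4096.0 GB/s, i.e. roughly 4 TB/s per stack

# Aggregate bandwidth for an eight-stack accelerator
aggregate_tbs = 8 * per_stack_gbs / 1000
print(aggregate_tbs)     # 32.768 TB/s, i.e. roughly 32 TB/s
```

The ~4 TB/s and ~32 TB/s figures in the article are these values rounded down.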
