Rambus Preps for HBM3
By Gary Hilson, EETimes (August 23, 2021)
The final specification for High Bandwidth Memory 3 (HBM3) has yet to be published, but that isn't stopping Rambus from laying the groundwork for its adoption, driven by the memory bandwidth requirements of AI and machine learning model training.
The silicon IP vendor has released its HBM3-ready memory interface subsystem, consisting of a fully integrated physical layer (PHY) and digital memory controller, the latter drawing on intellectual property from its recent acquisition of Northwest Logic.
The subsystem supports data rates of up to 8.4 Gbps, leveraging decades of high-speed signaling expertise as well as 2.5D memory system architecture design and enablement, said Frank Ferro, senior director of product marketing for IP cores. By delivering 1 terabyte per second of bandwidth, Rambus' HBM3-ready memory interface is said to double the performance of high-end HBM2E memory subsystems.
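As a rough sanity check on that figure, the sketch below shows how an 8.4 Gbps per-pin data rate works out to roughly 1 TB/s per device. It assumes HBM3 keeps the 1024-bit-wide stack interface used by HBM2/HBM2E, which the article itself does not state:

```python
# Back-of-the-envelope HBM3 bandwidth estimate.
# Assumption (not from the article): a 1024-bit-wide stack interface,
# as in HBM2/HBM2E.
data_rate_gbps_per_pin = 8.4   # per-pin data rate in Gbit/s
interface_width_bits = 1024    # assumed stack interface width

bandwidth_gbit_s = data_rate_gbps_per_pin * interface_width_bits  # Gbit/s
bandwidth_gbyte_s = bandwidth_gbit_s / 8                          # GByte/s

print(f"{bandwidth_gbyte_s:.1f} GB/s")  # ~1075.2 GB/s, i.e. about 1 TB/s
```

Under the same assumption, HBM2E at 3.6 Gbps per pin lands at roughly 460 GB/s, which is consistent with the claim of about double the bandwidth of high-end HBM2E subsystems.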