Rambus Preps for HBM3
By Gary Hilson, EETimes (August 23, 2021)
The final specification for High Bandwidth Memory 3 (HBM3) has yet to be ratified, but that isn't stopping Rambus from laying the groundwork for its adoption, driven by the memory bandwidth requirements of AI and machine learning model training.
The silicon IP vendor has released its HBM3-ready memory interface consisting of a fully integrated physical layer (PHY) and digital memory controller, the latter drawing on intellectual property from its recent acquisition of Northwest Logic.
The subsystem supports data rates of up to 8.4 Gbps, leveraging the company's decades of high-speed signaling expertise as well as its experience in 2.5D memory system architecture design and enablement, said Frank Ferro, senior director of product marketing for IP cores. By delivering 1 terabyte per second of bandwidth, Rambus' HBM3-ready memory interface is said to double the performance of high-end HBM2E memory subsystems.
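As a rough sanity check on those figures, the 1 TB/s number follows from the quoted 8.4 Gbps per-pin data rate if one assumes the usual 1024-bit-wide HBM device interface; the interface width is our assumption, not a detail stated in the article. A minimal sketch of the arithmetic:

```python
# Back-of-the-envelope check of the quoted bandwidth figure.
# Assumes a standard 1024-bit-wide HBM stack interface (not stated in the article).
data_rate_gbps = 8.4          # per-pin data rate quoted by Rambus
interface_width_bits = 1024   # assumed HBM stack interface width

bandwidth_gbps = data_rate_gbps * interface_width_bits  # gigabits per second
bandwidth_gBps = bandwidth_gbps / 8                     # gigabytes per second

print(f"{bandwidth_gBps:.1f} GB/s")  # ~1075 GB/s, i.e. roughly 1 TB/s per stack
```

By the same arithmetic, a high-end HBM2E stack running at roughly 3.6 Gbps works out to about 460 GB/s, which is consistent with the claimed doubling of performance.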