HBM2E targets AI/ML training
Frank Ferro, Senior Director of Product Management at Rambus, has written a detailed article for Semiconductor Engineering that explains why HBM2E is a perfect fit for Artificial Intelligence/Machine Learning (AI/ML) training. As Ferro points out, AI/ML growth and development are proceeding at a lightning pace. Indeed, AI training capabilities have jumped by a factor of 300,000 (10X annually) over the past 8 years. This trend continues to drive rapid improvements in nearly every area of computing, including memory bandwidth capabilities.
HBM: A Need for Speed
Introduced in 2013, High Bandwidth Memory (HBM) is a high-performance 3D-stacked SDRAM architecture.
“Like its predecessor, the second generation HBM2 specifies up to 8 memory die per stack, while doubling pin transfer rates to 2 Gbps,” Ferro explains. “HBM2 achieves 256 GB/s of memory bandwidth per package (DRAM stack), with the HBM2 specification supporting up to 8 GB of capacity per package.”
As Ferro notes, JEDEC announced the HBM2E specification in late 2018 to support increased bandwidth and capacity.
“With transfer rates rising to 3.2 Gbps per pin, HBM2E can achieve 410 GB/s of memory bandwidth per stack,” he explains. “In addition, HBM2E supports 12‑high stacks with memory capacities of up to 24 GB per stack.”
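The bandwidth figures quoted above follow directly from the pin transfer rate and the stack's interface width. A minimal sketch of the arithmetic, assuming the standard 1024-bit-wide HBM stack interface:

```python
# Peak per-stack bandwidth = pin rate (Gbps) x interface width (bits) / 8 bits per byte.
HBM_INTERFACE_WIDTH_BITS = 1024  # per-stack interface width in the HBM family

def stack_bandwidth_gbps(pin_rate_gbps: float) -> float:
    """Peak bandwidth per stack in GB/s for a given per-pin transfer rate."""
    return pin_rate_gbps * HBM_INTERFACE_WIDTH_BITS / 8

print(stack_bandwidth_gbps(2.0))  # HBM2 at 2 Gbps/pin -> 256.0 GB/s
print(stack_bandwidth_gbps(3.2))  # HBM2E at 3.2 Gbps/pin -> 409.6 GB/s (~410 GB/s)
```

This reproduces both numbers in Ferro's comparison: 256 GB/s for HBM2 and roughly 410 GB/s for HBM2E.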