The Evolution of HBM
By Archana Cheruliyil, senior product marketing manager at Alphawave Semi
High-bandwidth memory (HBM) was originally conceived as a way to increase the capacity of memory attached to a 2.5D package. It has since become a staple of high-performance computing, in some cases even replacing SRAM for L3 cache. Archana Cheruliyil, senior product marketing manager at Alphawave Semi, talks with Semiconductor Engineering about how and where HBM is used today, how it will be used in the future, why it is essential for AI systems, and how the new HBM4 standard and custom HBM will impact power, performance, and area (PPA).