The State of HBM4 Chronicled at CES 2026
The sixth-generation HBM technology is all set to make waves in AI designs.
By Majeed Ahmad, EE Times | January 12, 2026

High-bandwidth memory, a critical component in modern AI systems, particularly for large-scale model training, was a centerpiece at CES 2026, with the memory trio of Micron, Samsung, and SK hynix showing their HBM4 cards. The common message was the readiness of HBM4 devices, which address the “memory wall” that threatens to plateau AI scaling.
HBM4 promises a solution to the memory wall, the bottleneck where data processing speeds outpace the ability of memory to feed that data to the processor, by carrying out the most significant architectural overhaul in the technology's history. It is purpose-built for next-generation AI accelerators and data center workloads, delivering major gains in bandwidth, efficiency, and system-level customization.
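The bandwidth gains behind these claims come largely from HBM4 widening the per-stack interface from 1,024 to 2,048 bits. A back-of-envelope sketch of that arithmetic, using publicly reported JEDEC-class figures (the per-pin rates here are illustrative assumptions, not vendor-specific specs):

```python
# Back-of-envelope peak bandwidth per HBM stack.
# Assumed figures: HBM3E with a 1,024-bit bus at 9.6 Gb/s per pin,
# HBM4 with a 2,048-bit bus at 8.0 Gb/s per pin.

def peak_bandwidth_gbps(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: bus width x per-pin rate / 8 bits per byte."""
    return bus_width_bits * pin_rate_gbps / 8

hbm3e = peak_bandwidth_gbps(1024, 9.6)  # ~1,229 GB/s per stack
hbm4 = peak_bandwidth_gbps(2048, 8.0)   # 2,048 GB/s (~2 TB/s) per stack

print(f"HBM3E: {hbm3e:.0f} GB/s per stack")
print(f"HBM4:  {hbm4:.0f} GB/s per stack")
```

Even at a lower per-pin rate, the doubled interface width pushes a single HBM4 stack to roughly 2 TB/s, which is the headline number behind the memory-wall pitch.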