High Bandwidth Memory (HBM) at the AI Crossroads: Customization or Standardization?

As artificial intelligence (AI) reshapes industries and advances technological frontiers, its success hinges on advanced memory capabilities. Leading this transformation is High Bandwidth Memory (HBM), which offers unparalleled speeds and efficiencies.

HBM is a revolutionary technology that stacks memory dies vertically and interconnects them using Through-Silicon Vias (TSVs). This architecture shortens the distance signals must travel between dies, delivering significantly higher bandwidth and lower power consumption than legacy memory technologies.

HBM is well suited for data-intensive applications such as AI, graphics processing, and high-performance computing (HPC). It is also rapidly evolving to meet ever-growing processing and memory requirements.
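To make the bandwidth advantage concrete, here is a back-of-envelope calculation (an illustrative sketch, not from the panel) using publicly documented figures: an HBM3 stack exposes a 1,024-bit interface at up to 6.4 Gb/s per pin, versus a typical 32-bit GDDR6 device at 16 Gb/s per pin.

```python
def peak_bandwidth_gbps(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: total bits per second across the bus, divided by 8."""
    return bus_width_bits * pin_rate_gbps / 8

# HBM3: 1024-bit interface per stack, up to 6.4 Gb/s per pin
hbm3 = peak_bandwidth_gbps(1024, 6.4)    # 819.2 GB/s per stack

# GDDR6 for comparison: 32-bit device at 16 Gb/s per pin
gddr6 = peak_bandwidth_gbps(32, 16.0)    # 64.0 GB/s per device

print(f"HBM3 stack: {hbm3:.1f} GB/s, GDDR6 device: {gddr6:.1f} GB/s")
```

The wide, short vertical interconnect is what lets HBM run a bus roughly 32 times wider than a single GDDR6 device while keeping per-bit energy low.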

As new use cases emerge, critical questions must be answered:

  • Should HBM technologies continue to follow established standards for broad compatibility and scalability?
  • Should they be fast-tracked and customized to address specific use case requirements and time-to-market targets?

  • Or is there a third path — one that allows for customization, but doesn’t introduce a patchwork of incompatible technologies?

Industry experts from AWS, Marvell, Samsung, SK Hynix, and Synopsys discussed these questions and offered their insights at the inaugural Synopsys Executive Forum.

Left to right: Will Townsend (moderator), Nafea Bshara (AWS), Will Chu (Marvell), Harry Yoon (Samsung), Hoshik Kim (SK Hynix), and John Koeter (Synopsys) at Synopsys Executive Forum

The customization conundrum

HBM’s revolutionary design facilitates unprecedented data transfer speeds, vastly improving performance for demanding applications. Customized HBM (often referred to as cHBM) can push these efficiencies even further, enabling self-driving cars to make faster real-time decisions and AI-driven data centers to operate with greater throughput and energy efficiency.

“The demand and requirements for custom HBM will continue to rise,” said Harry Yoon, corporate EVP of products and solutions planning at Samsung. “We anticipate that the custom HBM market share will exceed 50% in the near future.”

Custom approaches don’t always follow industry standards, however, and come with their own set of challenges and risks. Multiple competing iterations of HBM — or any technology — can result in a fragmented landscape of proprietary solutions, each optimized for a narrow set of applications and lacking broad compatibility.

Nafea Bshara, VP and distinguished engineer at AWS, raised concerns about scalability and the potential stifling of innovation if the industry were to lean too heavily on custom solutions. According to Bshara, customization risks “shutting the door on the other players.”

A standardized approach, he said, helps ensure new entrants and smaller players can innovate and compete. Furthermore, standards allow manufacturers to produce memory chips in larger volumes, reducing per-unit costs. HBM can then become more accessible and cost-effective for a wide range of applications. 

Choice, interoperability, and innovation

As a representative of AWS — a major purchaser of memory products — Bshara said standards allow his company to choose from the best products available.

“We love all our memory vendors, but every year, one of them is good, and one of them is second,” he said, highlighting the value of choice without fear of future interoperability concerns.

While the panelists acknowledged the need for industry standards, they noted the pace of standards development and evolution typically lags behind a market clamoring for speed.

“[Custom HBM] only exists because the standards can’t keep up,” said Will Chu, SVP and GM of the Custom Cloud Solutions Business Unit at Marvell. “I’m hoping over time, as an industry, we can break down all the different things that we want to put in something ‘custom’ and then make it more standard.”

John Koeter, SVP of IP at Synopsys, agreed.

“We need to find a way to standardize the customizable aspects of HBM so that we can meet the diverse needs of various applications,” Koeter said, “without stifling innovation.”

Finding the middle ground

The panelists concluded the industry must focus on developing a framework that allows for customization within a standardized structure. This would include a common set of interfaces and protocols that can then be tailored to meet the specific needs of customers — without deviating from a standardized baseline.
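As a software analogy for this framework (hypothetical, for illustration only; the names and features below are invented), the idea is a common standardized baseline plus vendor extensions that are explicitly declared and negotiated, so custom capabilities never silently break compatibility:

```python
from dataclasses import dataclass, field

@dataclass
class HBMStack:
    """Hypothetical model of a memory stack: a standard baseline plus declared extensions."""
    vendor: str
    base_spec: str = "HBM3"                             # common, standardized baseline
    extensions: set[str] = field(default_factory=set)   # optional, vendor-declared features

    def supports(self, feature: str) -> bool:
        return feature == self.base_spec or feature in self.extensions

def negotiate(host_required: set[str], stack: HBMStack) -> bool:
    """A host only relies on custom features the stack explicitly advertises."""
    return all(stack.supports(f) for f in host_required)

stack = HBMStack(vendor="ExampleVendor", extensions={"near-memory-compute"})
print(negotiate({"HBM3"}, stack))                         # True: baseline always works
print(negotiate({"HBM3", "near-memory-compute"}, stack))  # True: advertised extension
print(negotiate({"proprietary-mode"}, stack))             # False: undeclared custom feature
```

In this sketch, any host can fall back to the standardized baseline, while customization lives in an advertised, interoperable extension set rather than in incompatible proprietary variants.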

According to the panelists, this type of approach would ensure custom HBM solutions do not become isolated, proprietary technologies — but instead contribute to a broader, more inclusive ecosystem that works together to overcome critical challenges.

“[We must] address the challenges of power and thermal issues,” said Hoshik Kim, SVP and fellow of memory systems research at SK Hynix, “while maintaining a degree of standardization that allows for interoperability.”

Encouraging industry collaboration

Despite the debate between customization and standardization, the panel was bullish about the future of HBM and interoperability.

“The future is extremely bright for this technology,” Koeter said. “It is likely to involve a blend of custom and standardized HBM. And collaboration among memory suppliers, IP companies, and foundries is crucial.”

Industry cooperation will ensure HBM solutions can support a growing set of needs and AI-driven workloads, he said, without hindering innovation or standardization. Fortunately, these efforts are well underway.

“The amount of collaboration I’ve seen in the industry — I haven’t seen it in 25 years,” Bshara noted.

Note: This article contains statements made during a panel discussion at Synopsys Executive Forum, held March 19, 2025, in Santa Clara, California.
