Adapting Foundation IP to Exceed 2 nm Power Efficiency in Next-Gen Hyperscale Compute Engines
Competing in the booming data center chip market often comes down to one factor: power efficiency. The less power a CPU, GPU, or AI accelerator requires to produce results, the more processing it can offer within a given power budget.
With data centers, and the power they demand, growing rapidly, the energy consumption of each chip directly impacts the enormous cost of running gigawatt-scale AI data centers, where power and cooling account for 40–60% of operational expenditures.
To reduce the energy consumption of their workloads and gain a competitive edge, one software and cloud computing titan has made the strategic bet to design their own next-gen hyperscale System-on-Chip (SoC). By combining the advantages of new 2 nm-class process nodes with advanced, customized chip design techniques, the company is doubling down on the belief that innovation spanning process, design, and architecture can unlock new levels of power and cost efficiency.

Power play
To offer a compelling alternative in the market, the company knew that any new 2 nm design must push beyond the performance and efficiency entitlement already baked into the scaling factors of the latest transistor fabrication methods. The transition to the 2 nm process is expected to deliver a 25–30% power reduction relative to the previous 3 nm node.
The company set an ambitious goal of achieving an additional 5% improvement on the 2 nm baseline. Through close collaboration with Synopsys — combining EDA software flow enhancements with our optimized Foundation IP logic library — the company exceeded their goal, achieving:
- 7.34% reduced power consumption with the same baseline flow.
- 9.51% reduced power consumption with an optimized flow.
- 7.5% silicon area advantage over baseline at iso-performance.
The company also evaluated our 2 nm embedded memories, which exceeded SRAM scaling expectations compared to our 3 nm product. On average, the 2 nm memory instances delivered 12% higher speed, occupied 8% less area, and consumed 12% less power than their 3 nm counterparts.
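As a back-of-the-envelope sanity check, the node-level and flow-level gains compound multiplicatively rather than additively, since each reduction applies to the power remaining after the previous one. A minimal sketch (the 27.5% node figure is an assumed midpoint of the quoted 25–30% range, not a number from the engagement):

```python
# Illustrative arithmetic only: composing the quoted power reductions.
NODE_REDUCTION = 0.275   # assumed midpoint of the 25-30% range for 3 nm -> 2 nm
FLOW_REDUCTION = 0.0951  # optimized-flow gain reported above

# Relative power multipliers compose: each reduction acts on the
# power remaining after the previous one.
relative_power = (1 - NODE_REDUCTION) * (1 - FLOW_REDUCTION)
total_reduction = 1 - relative_power

print(f"Power vs. 3 nm baseline: {relative_power:.1%}")   # ~65.6%
print(f"Total reduction:         {total_reduction:.1%}")  # ~34.4%
```

Under these assumptions, the combined effect is roughly a 34% power reduction versus the 3 nm baseline, noticeably less than the ~37% a naive addition of the two percentages would suggest.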
Expert collaboration
Because the transition to 2 nm brings a shift from FinFET to gate-all-around (GAA) transistor architecture, the company’s SoC developers faced a particularly steep learning curve, with increased design complexity and new technology to assimilate.
They engaged our team in the early stages of the project — the byproduct of a trusted working relationship that spans more than four generations of AI chip designs — and even licensed our Foundation IP prior to the availability of any silicon reports.
The company used our IP, reference methodology, and Fusion Compiler tool to explore all commercially available options for achieving their power budget requirements. While the early development cycles produced the silicon area advantage, they did not achieve the power scaling targets the company sought.
Adaptation and optimization
Seeking additional assistance, the company inquired whether our EDA tools and IP could be leveraged to push the design’s performance further.
R&D experts from our IP and EDA groups began collaborating on the design. Starting with the standard logic libraries, the IP group worked closely with the company’s designers to adapt and optimize the libraries with new cells and updated modeling. Over several iterations, the teams delivered the 7.34% power benefit, with Synopsys PrimePower used for final power analysis.
Our Technology and Product Development Group then helped the company take it a step further. By developing new algorithms for Fusion Compiler, and after many trials based on the latest recommended power recipe, design flow optimizations produced the 9.51% combined power benefit.
At the same time, our application engineers worked closely with the company to provide the best solution from our broad portfolio of memory compilers. Weighing performance requirements with power and area targets, we were able to extend the benefit of 2 nm beyond instance-level scaling. In one key scenario, power was reduced an additional 25% by using an alternative configuration that met the 2 nm requirements.
Team efforts
Developing a hyperscale SoC on the most advanced process node pushes engineering teams harder than ever — and for the company, the stakes couldn’t be higher. It’s not just about the investment; it’s about securing a competitive edge in a fast-moving market. With so much riding on power targets, we’re proud the company trusted our proven expertise to help deliver their next-gen SoCs.