Half of the Compute Shipped to Top Hyperscalers in 2025 will be Arm-based
Just over six years ago, we launched Arm Neoverse for the next generation of cloud infrastructure, recognizing that delivering new levels of scalable performance on Arm's flexible, power-efficient compute platform could drive a systemic shift in the capabilities and costs of the datacenter ecosystem.
Fast forward to today, and Neoverse adoption has reached new heights: close to 50 percent of the compute shipped to top hyperscalers in 2025 will be Arm-based.
The cloud computing landscape is being fundamentally reshaped in the age of AI, with complex training and inference workloads driving insatiable computing needs and placing immense demands on cloud datacenters. AI servers are set to grow by more than 300 percent in the next few years, and for that to scale, power efficiency is no longer a competitive advantage: it is a baseline industry requirement. Today, we design datacenters in gigawatts, not megawatts, and in that world, power efficiency defines profitability. This is the same power efficiency that has been part of Arm's DNA for the past 35 years.
Powering leading silicon solutions
With Neoverse, Arm has given our market-leading partners a path to shape their silicon roadmaps around the insights that emerge from operating software at such scale, optimizing their entire datacenters to a degree that was previously impossible. This is why ten of the world's largest hyperscalers are developing and deploying Arm-based chips in their datacenters.
Hyperscalers such as Amazon Web Services (AWS), Google Cloud and Microsoft Azure have adopted the Arm compute platform to build their own general-purpose custom silicon and transform energy usage in the datacenter and cloud, in some cases reporting up to 60 percent better efficiency than previous-generation chips. Arm uniquely enables them to optimize their silicon for their own infrastructure and their software workloads, both internal and hosted.
The Arm compute platform also gives our partners the flexibility to create a new generation of customized, differentiated silicon solutions for AI. For example, NVIDIA's Grace Blackwell superchip for AI-based infrastructure combines NVIDIA's Blackwell GPU accelerated computing architecture with the Arm Neoverse-based Grace CPU over an extraordinarily high-bandwidth, coherent mesh network: a system tailor-made to achieve unmatched performance for AI workloads.
The demand for such tailor-made AI silicon is reflected in NVIDIA’s recent announcement that 3.6 million Blackwell chips have been ordered by the top four US-based hyperscale cloud providers alone. The Arm Neoverse-based Grace CPU is integral to meeting this high demand, providing the computational power and efficiency required for modern AI and datacenter applications.
Seamless software innovation
General-purpose chipsets from Arm’s partners, like AWS’ Graviton, Google Cloud’s Axion and Microsoft’s Cobalt, allow software developers to build their applications on Arm, and benefit from efficiency and performance optimizations. Some of the world’s most popular applications, including Paramount+, Spotify, and Uber, have migrated to Arm-based cloud infrastructure in a growing wave of momentum, motivated by significant total cost of ownership (TCO) and energy savings. Meanwhile, leading data platform companies, including Oracle and Salesforce, are also shifting their services to Arm-based infrastructure.
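For developers following the migrations described above, the first practical step is usually confirming which architecture their code is running on. As a minimal, portable sketch (the instance names and messages are illustrative, not from any provider's tooling):

```shell
# Minimal sketch: detect whether the current machine is Arm-based.
# `uname -m` reports the kernel's machine architecture string.
arch="$(uname -m)"
case "$arch" in
  aarch64|arm64) echo "Arm-based instance ($arch)" ;;
  *)             echo "Not Arm-based ($arch)" ;;
esac
```

In practice, teams then build artifacts for both architectures, for example multi-architecture container images via `docker buildx build --platform linux/amd64,linux/arm64`, so the same image tag runs on x86 and Arm-based instances alike; for most managed-language workloads, no source changes are needed.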
The world is building on Arm, from cloud to edge
With power efficiency at the forefront, clouds and datacenters are now being designed Arm-first, from silicon to software. Arm is ubiquitous from cloud to edge; no other platform has scaled to this level, and that scale is unlocking entirely new possibilities for innovation, efficiency and capability. This is a world where Arm is at the core of all computing, with our compute platform defining the era of AI.
The future of computing is built on Arm.