The Arm Evolution: From IP to Platform for the AI Era

By Rene Haas, CEO, Arm

The future of AI compute is being built on Arm.

AI continues to transform every major market — from the largest datacenters to the smallest devices, such as earbuds — intensifying the demands on compute. Our industry faces a fundamental challenge: delivering massive performance gains while keeping power consumption in check.

As AI workloads add to rapidly evolving compute demands, power efficiency at the system-on-chip level is more important than ever. This is why we introduced Arm Compute Subsystems (CSS) across every one of our key markets: infrastructure, client, automotive, and the edge AI platform for IoT. There is simply no other way to design a system-on-chip.

These compute platforms deliver integrated, validated systems that provide faster time-to-market, better performance-per-watt, and scalable innovation. 

To better communicate these platforms to the outside world, we’re introducing a new product naming architecture:

  • Each compute platform will now have a clear identity for each key end market:
    • Arm Neoverse for infrastructure
    • Arm Niva for PC
    • Arm Lumex for mobile
    • Arm Zena for automotive
    • Arm Orbis for IoT
  • The Mali brand will continue as our GPU brand, with IP referenced as components within the platforms.
  • We are simplifying IP numbering by aligning it with platform generations and using names like Ultra, Premium, Pro, Nano, and Pico to show performance tiers — making it easier for developers and customers to navigate our roadmap.

This platform-first approach reflects the rapid transition taking place to the Arm compute platform at the system level, not just the core IP. It allows our partners to integrate Arm's technology faster, with higher confidence, and with less complexity — especially as they scale to meet the demands of AI.

We are very excited to announce this new naming to the world. The future of computing will be built on Arm.
