The Age of AI Demands Faster Chip Development: Only Arm and Cadence Deliver
Strategic collaboration accelerates Custom Silicon for evolving AI workloads
As AI continues its rapid evolution, optimized silicon is crucial to unlocking next-generation applications. Arm serves as a foundation for this innovation with its CPU, GPU, and related technologies, and with pioneering solutions like Arm Neoverse Compute Subsystems (CSS), introduced earlier this year.
CSS are validated, performance-optimized subsystems – building blocks that integrate seamlessly into systems-on-chip (SoCs) – designed to mitigate risk, reduce non-recurring engineering (NRE) costs, and expedite time to market. Neoverse CSS gives partners the flexibility to take these building blocks, tailor them for cutting-edge process nodes, and gain access to custom acceleration for new AI applications. Given the complexity of a CSS and the expertise needed to assemble a competitive one, software is a critical component of each delivery – from drivers all the way to the application layer – with partner-specific workloads used to optimize performance and power.
This process involves taking Arm Neoverse platform IP and refining it for enhanced performance, power efficiency, and area, using state-of-the-art foundry processes. The initiative is an integral part of Arm Total Design, an ecosystem program designed to smooth and speed the delivery of customized SoCs – a critical capability in the era of AI.