Optimizing AI models for Arm Ethos-U NPUs using the NVIDIA TAO Toolkit
Optimizations achieve up to a 4X increase in inference throughput and a 3X reduction in memory
The proliferation of AI at the edge offers several advantages, including decreased latency, enhanced privacy, and cost-efficiency. Arm has been at the forefront of this development, with a focus on delivering advanced AI capabilities at the edge across its Cortex-A and Cortex-M CPUs and Ethos-U NPUs. However, this space continues to expand rapidly, presenting challenges for developers looking to enable easy deployment on billions of edge devices.
One such challenge is developing deep learning models for edge devices: developers must work within tight limits on storage, memory, and compute power while still balancing model accuracy against run-time metrics such as latency or frame rate. An off-the-shelf model designed for a more powerful platform may run slowly, or not at all, when deployed on a more resource-constrained platform.
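A common step in shrinking a model for an NPU such as the Ethos-U, which executes fully integer-quantized networks, is post-training int8 quantization. The sketch below is illustrative only and is not the article's actual workflow: it uses a tiny hypothetical Keras model and random calibration data in place of a TAO-exported network and a real dataset, and quantizes it with the standard TensorFlow Lite converter API.

```python
import numpy as np
import tensorflow as tf

# Hypothetical tiny model standing in for a TAO-exported network.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

def representative_data():
    # Calibration samples let the converter pick quantization ranges;
    # random data here stands in for a slice of the real training set.
    for _ in range(16):
        yield [np.random.rand(1, 32, 32, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
# Force full-integer ops so the whole graph can run on the NPU.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting `.tflite` file would then typically be compiled for a specific Ethos-U configuration with Arm's Vela compiler before deployment. Int8 weights alone account for roughly a 4X size reduction over float32, which is where much of the memory saving on constrained devices comes from.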