Optimizing AI models for Arm Ethos-U NPUs using the NVIDIA TAO Toolkit
Optimizations achieve up to a 4X increase in inference throughput and a 3X reduction in memory footprint
The proliferation of AI at the edge offers several advantages, including decreased latency, enhanced privacy, and cost efficiency. Arm has been at the forefront of this development, focusing on delivering advanced AI capabilities at the edge across its Cortex-A and Cortex-M CPUs and Ethos-U NPUs. However, this space continues to expand rapidly, presenting challenges for developers who want to deploy easily across billions of edge devices.
One such challenge is developing deep learning models for edge devices: developers must work within tight limits on storage, memory, and compute power, while still balancing model accuracy against run-time metrics such as latency or frame rate. An off-the-shelf model designed for a more powerful platform may run slowly, or not at all, when deployed on a more resource-constrained platform.
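To make this trade-off concrete, the sketch below shows one common step in an Ethos-U deployment flow: full-integer (int8) post-training quantization with the TensorFlow Lite converter, which shrinks model size and enables the NPU's integer-only execution. This is a minimal sketch of the publicly documented TFLite tooling, not necessarily the exact NVIDIA TAO-based pipeline the article describes; the SavedModel path, input shape, and calibration generator are illustrative placeholders.

    # Minimal sketch: int8 post-training quantization for an Ethos-U target.
    # Assumes a trained model exported as a TensorFlow SavedModel; the path,
    # input shape, and calibration data below are illustrative placeholders.
    import numpy as np
    import tensorflow as tf

    def representative_data():
        # Calibration samples let the converter choose int8 scales/zero-points.
        # Replace with a few hundred real, preprocessed input samples.
        for _ in range(100):
            yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

    converter = tf.lite.TFLiteConverter.from_saved_model("saved_model/")
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_data
    # Ethos-U NPUs execute fully integer (int8) operators only.
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8

    with open("model_int8.tflite", "wb") as f:
        f.write(converter.convert())

The quantized model can then be scheduled for the NPU with Arm's Vela compiler, for example with `vela model_int8.tflite --accelerator-config ethos-u55-128` (one of several documented accelerator configurations).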