The Ethos-N78, Arm’s second-generation, highly scalable and efficient NPU, enables new immersive applications with a 2.5x increase in single-core performance, now scalable from 1 to 10 TOP/s and beyond through many-core technologies. It provides the flexibility to optimize ML capability with 90+ configurations.
Highly Scalable and Efficient Second-Generation ML Inference Processor
Overview
Key Features
- Increased Performance
- Improves user experience with a 2.5x increase in single-core performance, scalable from 1 to 10 TOP/s and beyond through many-core technologies.
- Improved Efficiency
- Up to 40 percent lower DRAM bandwidth (MB/inference) and up to a 25 percent increase in area efficiency (inferences/s/mm²) enable demanding neural networks to run in diverse solutions.
- Extended Configurability
- Target multiple markets with the flexibility to optimize ML capability across 90+ configurations, evaluated with the Ethos-N Static Performance Analyzer.
- Unified Software and Tools
- Develop, deploy, and debug with the Arm AI platform using online or offline compilation, and profile with Streamline in Arm Development Studio.
Technical Specifications
Related IPs
- High-performance, scalable multi-mode communication processor for IoT wireless applications
- ML Inference Processor with Balanced Efficiency and Performance
- High-Efficiency, Low-Area ML Inference Processor
- Highly scalable inference NPU IP for next-gen AI applications
- Embedded Configuration and Test Processor
- Perspective Transformation and Lens Correction Image Processor