Deeply Embedded AI Accelerator for Microcontrollers and End-Point IoT Devices
Overview
The NeuroMosAIc Processor (NMP) family breaks down the barriers to deploying ML by delivering a general-purpose architecture and a simple programmer's model that support virtually any class of neural network architecture and use case.
Our key differentiation starts with the ability to execute multiple AI/ML models simultaneously, significantly expanding what is possible compared with existing approaches. This advantage comes from the co-developed NeuroMosAIc Studio software, which dynamically allocates hardware resources to match the target workload, resulting in highly optimized, low-power execution. Designers may also select the optional on-device training acceleration extension, enabling iterative learning after deployment. This capability removes the dependence on the cloud while improving accuracy, efficiency, customization, and personalization without costly model retraining and redeployment, thereby extending device lifecycles.
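To make the concurrent multi-model claim concrete, here is a minimal compute-budget sketch of the kind of allocation decision such a runtime has to make. Every figure in it (the 512-MAC array, the assumed 1 GHz clock, and the two example workloads) is an illustrative assumption, not published NMP data.

    #include <stdio.h>

    /* Illustrative compute-budget check for concurrent execution of two
     * models on one accelerator.  All numbers (MAC count, clock,
     * per-inference MAC counts, frame rates) are assumptions chosen for
     * the example, not product specifications. */

    struct workload {
        const char *name;
        double macs_per_inference;   /* multiply-accumulates per inference */
        double inferences_per_sec;   /* required sustained rate            */
    };

    int main(void)
    {
        const double peak_gops = 512 * 2.0 * 1.0;  /* assumed 512 MACs @ 1 GHz, 2 ops/MAC */

        const struct workload loads[] = {
            { "keyword spotting",  2.0e6,  50.0 },  /* ~2 M MACs, 50 inferences/s  */
            { "person detection", 60.0e6,  10.0 },  /* ~60 M MACs, 10 inferences/s */
        };

        double total_gops = 0.0;
        for (unsigned i = 0; i < sizeof loads / sizeof loads[0]; i++) {
            double gops = loads[i].macs_per_inference * 2.0
                          * loads[i].inferences_per_sec / 1.0e9;
            total_gops += gops;
            printf("%-18s needs %6.2f GOPS sustained\n", loads[i].name, gops);
        }
        printf("combined: %.2f GOPS against ~%.0f GOPS peak -> %s\n",
               total_gops, peak_gops,
               total_gops < peak_gops ? "both models fit concurrently"
                                      : "over budget");
        return 0;
    }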
Key Features
- Performance: Up to 1 TOPS (see the peak-throughput sketch after this list)
- MACs (8x8): 64, 128, 256, 512
- Data Types: 1-bit, INT8, INT16
- Internal SRAM: Up to 1 MB
- Interfaces: 3x AXI
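As a rough consistency check on the headline figure (the ~1 GHz clock used below is an assumption, not a published specification), counting two operations per multiply-accumulate gives the peak throughput of each listed MAC configuration:

    #include <stdio.h>

    /* Peak-throughput back-of-the-envelope for the listed MAC configurations.
     * The 1 GHz clock is an assumed figure for illustration only. */
    int main(void)
    {
        const double clock_hz = 1.0e9;                 /* assumed clock frequency   */
        const int mac_configs[] = {64, 128, 256, 512}; /* MACs (8x8) from the list  */

        for (unsigned i = 0; i < sizeof mac_configs / sizeof mac_configs[0]; i++) {
            /* One MAC = one multiply + one add = 2 ops per cycle. */
            double tops = mac_configs[i] * 2.0 * clock_hz / 1.0e12;
            printf("%3d MACs -> %.2f TOPS peak\n", mac_configs[i], tops);
        }
        return 0;
    }

Under that assumed clock, only the 512-MAC configuration reaches roughly 1 TOPS; the smaller configurations scale down proportionally.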
Benefits
- The NMP-350 extends the low-power leadership of the NeuroMosAIc Processor family, delivering 60% higher performance efficiency while maintaining concurrent multimodal inferencing matched to modern sensors and embedded systems. Architecture changes include doubling the core count per tile, enabling up to 1 TOPS while improving utilization and lowering area by 25%.
- An upgraded RISC-V controller delivers 4x higher initialization and post-processing performance, with the option of swapping in an Arm Cortex-M core. Activation function support is expanded to align with state-of-the-art neural network architectures (an illustrative post-processing example follows this list).
- Four MAC unit configurations enable drop-in compatibility with area-constrained systems, from sensors and MCUs to dedicated functions within complex SoCs.
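As an illustration of the kind of post-processing such a controller typically handles, the routine below dequantizes INT8 network outputs and applies a numerically stable softmax. It is a generic sketch with made-up scale and zero-point values, not firmware shipped with the NMP.

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Generic post-processing example: dequantize INT8 logits produced by an
     * accelerator and convert them to probabilities with a numerically stable
     * softmax.  The scale/zero-point values are made up for illustration. */
    static void dequant_softmax(const int8_t *q, float scale, int32_t zero_point,
                                float *prob, int n)
    {
        float max_logit = -INFINITY;
        for (int i = 0; i < n; i++) {
            float logit = scale * (float)(q[i] - zero_point);
            prob[i] = logit;
            if (logit > max_logit)
                max_logit = logit;
        }
        float sum = 0.0f;
        for (int i = 0; i < n; i++) {
            prob[i] = expf(prob[i] - max_logit);   /* subtract max for stability */
            sum += prob[i];
        }
        for (int i = 0; i < n; i++)
            prob[i] /= sum;
    }

    int main(void)
    {
        const int8_t logits[4] = { 17, -3, 42, 5 };   /* example INT8 outputs */
        float prob[4];

        dequant_softmax(logits, 0.25f, 0, prob, 4);
        for (int i = 0; i < 4; i++)
            printf("class %d: %.3f\n", i, prob[i]);
        return 0;
    }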
Block Diagram
Applications
- Driver Authentication, Digital Mirrors, and Personalization
- Predictive Maintenance
- Machine Automation
- Health Monitoring
Technical Specifications
Maturity: Production Proven
Availability: Publicly Licensable
Related IPs
- Lowest Cost and Power AI Accelerator for End Point Devices
- Performance Efficiency AI Accelerator for Mobile and Edge Devices
- Performance Efficiency Leading AI Accelerator for Mobile and Edge Devices
- Integrated Secure Element (iSE) for industrial IoT, factory automation, and AI devices
- Security Protocol Accelerator for SM3 and SM4 Ciphers
- ARC HS36x2: dual-core version of HS36 with I and D caches for high-performance embedded applications