Lowest Cost and Power AI Accelerator for End Point Devices

Overview

The NeuroMosaic Processor (NMP) family lowers the barriers to deploying ML by delivering a general-purpose architecture and a simple programmer's model that support virtually any class of neural network and use case.

Our key differentiation starts with the ability to execute multiple AI/ML models simultaneously, significantly expanding capability over existing approaches. This advantage comes from the co-developed NeuroMosAIc Studio software, which dynamically allocates hardware resources to match the target workload, resulting in highly optimized, low-power execution. Designers may also select the optional on-device training acceleration extension, enabling iterative learning after deployment. This key capability cuts the cord to the cloud while improving accuracy, efficiency, customization, and personalization without reliance on costly model retraining and redeployment, thereby extending device lifecycles.
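To make the multi-model claim concrete, the sketch below shows one way hardware resources could be split across concurrently resident models in proportion to their compute demand. The `allocate_macs` helper, the model names, and the demand figures are illustrative assumptions for this sketch; they are not the NeuroMosAIc Studio API.

```python
# Hypothetical sketch: partition a fixed pool of MAC units across
# concurrently resident models in proportion to their compute demand.
# Names and numbers are illustrative; this is not the Studio API.

def allocate_macs(total_macs: int, demands: dict) -> dict:
    """Split total_macs proportionally to each model's relative demand."""
    total_demand = sum(demands.values())
    alloc = {m: int(total_macs * d / total_demand) for m, d in demands.items()}
    # Hand any rounding remainder to the most demanding model.
    remainder = total_macs - sum(alloc.values())
    alloc[max(demands, key=demands.get)] += remainder
    return alloc

# Two models sharing a 256-MAC configuration.
print(allocate_macs(256, {"keyword_spotting": 1.0, "face_id": 3.0}))
```

The proportional split is only one possible policy; a real scheduler would also weigh latency deadlines and memory footprint.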
Key Features
- Performance: Up to 512 GOPs
- MACs (8x8): 64, 128, 256
- Data Types: 1-bit, INT8, INT16
- Internal SRAM: Up to 512 KB
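The headline 512 GOPs figure is consistent with the largest 256-MAC configuration if each MAC retires a multiply and an accumulate (2 ops) per cycle. The 1 GHz clock below is an assumption for the sake of the arithmetic, not a published specification.

```python
# Peak-throughput sketch relating the MAC counts to the GOPs figure.
# Assumptions (not from the datasheet): 2 ops per MAC per cycle, 1 GHz clock.

def peak_gops(num_macs: int, clock_ghz: float = 1.0, ops_per_mac: int = 2) -> float:
    """Peak throughput in GOPs for a given MAC-array configuration."""
    return num_macs * ops_per_mac * clock_ghz

for macs in (64, 128, 256):
    print(f"{macs} MACs -> {peak_gops(macs)} GOPs")
# The 256-MAC configuration reaches the advertised 512 GOPs.
```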
Block Diagram

Applications
- Driver Authentication, Digital Mirrors, and Personalization
- Predictive Maintenance
- Machine Automation
- Health Monitoring
Technical Specifications
Maturity
Production Proven
Availability
Publicly Licensable
Related IPs
- Deeply Embedded AI Accelerator for Microcontrollers and End-Point IoT Devices
- Root of Trust - Foundational security in SoCs and FPGAs for IoT servers, gateways, edge devices and sensors
- Root of Trust - Foundational security in SoCs and FPGAs for Chinese IoT servers, gateways, edge devices and sensors
- Root of Trust - Foundational security for SoCs, secure MCU devices and sensors
- Performance Efficiency AI Accelerator for Mobile and Edge Devices
- Highly scalable performance for classic and generative on-device and edge AI solutions