Neural Decision Processor IP
6 IP from 4 vendors

NPU IP family for generative and classic AI with high power efficiency; scalable and future-proof
- Supports a wide range of activation and weight data types, from 32-bit floating point down to 2-bit binary neural networks (BNN), as sketched below
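The spread of supported precisions is easiest to see with a small, vendor-agnostic sketch: the NumPy snippet below quantizes FP32 weights onto progressively narrower integer grids. The symmetric per-tensor scaling is an assumption made for illustration, not this IP's actual quantization scheme.

```python
import numpy as np

def quantize_symmetric(weights, bits):
    """Map FP32 weights onto a signed integer grid with `bits` bits.

    Hypothetical illustration only: real NPU toolchains use their own
    calibration and (often per-channel) scaling schemes.
    """
    qmax = 2 ** (bits - 1) - 1                 # e.g. 127 for 8-bit, 1 for 2-bit
    scale = np.abs(weights).max() / qmax       # single per-tensor scale factor
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

w = np.random.randn(64, 64).astype(np.float32)   # stand-in for FP32 layer weights
for bits in (8, 4, 2):                           # stepping down toward 2-bit
    q, scale = quantize_symmetric(w, bits)
    err = np.abs(w - q * scale).mean()
    print(f"{bits}-bit mean reconstruction error: {err:.4f}")
```

Stepping from 8-bit down to 2-bit typically trades reconstruction error for memory footprint and MAC width, which is the trade-off such a wide data-type range is meant to expose.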

High-Performance NPU
- Low Power Consumption
- High Performance
- Flexibility and Configurability
- High-Precision Inference

High Performance Scalable Sensor Hub DSP Architecture
- Self-contained, specialized on-device sensor hub processor
- Unifies multi-sensor processing with AI and sensor fusion
- Highly configurable 8-way VLIW architecture

High-performance 32-bit multi-core processor with AI acceleration engine
- Instruction set: T-Head ISA (32-bit/16-bit variable-length instructions)
- Multi-core: homogeneous multi-core with 1 to 4 configurable cores
- Pipeline: 12-stage
- Microarchitecture: triple-issue, deep out-of-order

Enhanced Neural Processing Unit for functional safety providing 49,152 MACs/cycle of performance for AI applications
- Scalable real-time AI/neural processor IP with up to 3,500 TOPS of performance (see the throughput estimate below)
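To relate a MACs/cycle figure to a TOPS figure, a common back-of-the-envelope calculation counts each MAC as two operations (multiply plus accumulate) and multiplies by the clock rate. The clock frequency and instance count in the sketch below are illustrative assumptions, not vendor datasheet values.

```python
def peak_tops(macs_per_cycle, clock_ghz, instances=1):
    """Back-of-the-envelope peak throughput: 1 MAC = 2 ops (multiply + add)."""
    ops_per_second = macs_per_cycle * 2 * clock_ghz * 1e9 * instances
    return ops_per_second / 1e12

# Illustrative numbers only; the 1.3 GHz clock and instance count are assumptions.
print(peak_tops(49_152, 1.3))        # ~128 TOPS for one 49,152-MAC/cycle instance
print(peak_tops(49_152, 1.3, 28))    # multi-instance scaling into the thousands of TOPS
```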

Enhanced Neural Processing Unit for safety providing 32,768 MACs/cycle of performance for AI applications
- Adds hardware safety features to the NPX6 NPU while minimizing area and power impact
- Optional IEEE 754-compliant vector floating-point unit offers single-precision or half-precision operations and advanced math functions (precision contrast sketched below)
- Supports the ISO 26262 automotive functional safety standard
- Targets ASIL D compliance with ISO 26262:2018
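To make the single- versus half-precision option concrete, the short NumPy comparison below contrasts IEEE 754 binary32 and binary16 arithmetic; it only illustrates the two formats and says nothing about the unit's internal datapath.

```python
import numpy as np

x32 = np.float32(0.1) * np.float32(3.0)   # IEEE 754 binary32 (single precision)
x16 = np.float16(0.1) * np.float16(3.0)   # IEEE 754 binary16 (half precision)

print(x32, x16)                      # ~7 decimal digits vs ~3 decimal digits of precision
print(abs(float(x32) - float(x16)))  # rounding gap between the two formats
```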