NPU Processor IP Cores
NPU (Neural Processing Unit) Processor IP cores provide high-performance computing power for tasks such as image recognition, natural language processing, and data analysis, enabling real-time AI processing at the edge.
33 NPU Processor IP Cores from 8 vendors (showing 1-10)
General Purpose Neural Processing Unit (NPU)
- Hybrid Von Neumann + 2D SIMD matrix architecture
- 64-bit instruction word, single instruction issue per clock
- 7-stage, in-order pipeline
NPU IP for Embedded AI
- Fully programmable to efficiently execute neural networks, feature extraction, signal processing, audio, and control code
- Scalable by design to meet a wide range of use cases, with MAC configurations of up to 64 int8 MACs per cycle (natively 128 4x8 MACs)
- Future-proof architecture that supports the most advanced ML data types and operators
NPU IP family for generative and classic AI with the highest power efficiency; scalable and future-proof
- Supports a wide range of activation and weight data types, from 32-bit floating point down to 2-bit binary neural networks (BNN)
AI accelerator (NPU) IP - 1 to 20 TOPS
- Power-efficient: 18 TOPS/W
- Scalable performance from 2K to 9K MACs
- Capable of processing HD images on chip
AI accelerator (NPU) IP - 32 to 128 TOPS
- Power-efficient: 18 TOPS/W
- 36K to 56K MACs
- Multi-job support
AI accelerator (NPU) IP - 16 to 32 TOPS
- Power-efficient: 18 TOPS/W
- Scalable performance from 18K MACs
- Capable of processing HD images on chip
Highly scalable inference NPU IP for next-gen AI applications
- Matrix multiplication: 4,096 MACs/cycle (int8), 1,024 MACs/cycle (int16)
- Vector processor: RISC-V with RVV 1.0
- Custom instructions for softmax and local storage access
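MAC-per-cycle figures like those above translate to peak throughput only once a clock rate is fixed. A quick back-of-the-envelope sketch (the 1 GHz clock is an illustrative assumption, not a vendor spec; each MAC is counted as two operations, one multiply and one add):

```python
# Rough peak-throughput estimate from a MACs/cycle figure.
# The 1 GHz default clock is an illustrative assumption, not a vendor spec.
def peak_tops(macs_per_cycle: int, clock_hz: float = 1e9) -> float:
    ops_per_cycle = macs_per_cycle * 2  # one multiply + one add per MAC
    return ops_per_cycle * clock_hz / 1e12

print(peak_tops(4096))  # int8 path: ~8.192 TOPS at 1 GHz
print(peak_tops(1024))  # int16 path: ~2.048 TOPS at 1 GHz
```

Sustained throughput on real workloads is lower, since utilization depends on layer shapes and memory bandwidth.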
AI Accelerator (NPU) IP - 3.2 GOPS for Audio Applications
- 3.2 GOPS
- Ultra-low power consumption (under 300 µW)
- Low latency
4-/8-bit mixed-precision NPU IP
- Easily customized for different core sizes and performance targets
- NN Converter translates a network file into an internal network format; supports ONNX (PyTorch), TF-Lite, and CFG (Darknet)
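Converter front ends like the NN Converter above typically begin by parsing the source network description into a neutral in-memory form. A minimal sketch for a simplified Darknet-style CFG (the syntax details and the resulting layer-dictionary layout are assumptions for illustration, not the vendor's actual internal format):

```python
# Illustrative sketch of an NN-converter front end: parse a simplified
# Darknet-style .cfg text into a neutral list-of-layers representation.
def parse_cfg(text: str) -> list[dict]:
    layers, current = [], None
    for line in text.splitlines():
        line = line.split("#")[0].strip()  # drop comments and blanks
        if not line:
            continue
        if line.startswith("[") and line.endswith("]"):
            current = {"type": line[1:-1]}  # new layer section
            layers.append(current)
        elif "=" in line and current is not None:
            key, value = (s.strip() for s in line.split("=", 1))
            current[key] = value  # layer parameter
    return layers

cfg = """
[convolutional]
filters=16
size=3

[maxpool]
size=2
"""
layers = parse_cfg(cfg)
# layers[0] -> {"type": "convolutional", "filters": "16", "size": "3"}
```

A real converter would then validate parameters, infer tensor shapes, and map each layer onto the NPU's supported operators.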
Enhanced Neural Processing Unit for safety, providing 98,304 MACs/cycle for AI applications
- Adds hardware safety features to the NPX6 NPU, minimizing area and power impact
- Supports the ISO 26262 automotive functional-safety standard
- Supports CNNs and transformers (including generative AI), recommender networks, RNNs/LSTMs, and more
- Targets ASIL B and ASIL D compliance to ISO 26262