NPU AI processor IP
15 IP cores from 9 vendors (1-10 shown)

High-Performance NPU
- Low Power Consumption
- High Performance
- Flexibility and Configurability
- High-Precision Inference

IP cores for ultra-low power AI-enabled devices
- Ultra-fast Response Time
- Zero-latency Switching
- Low Power

Highly scalable performance for classic and generative on-device and edge AI solutions
- Flexible System Integration
- Scalable Design and Configurability
- Efficient Mapping of State-of-the-Art AI/ML Workloads

Neural network processor designed for edge devices
- High energy efficiency
- Supports mainstream deep learning frameworks
- Low power consumption
- An integrated AI solution

HBM3 PHY IP at 7nm
- Strong PPA for both performance-driven and low-power-driven designs
- Ultra-low read/write latency with programmable PHY boundary timing

GDDR6 PHY IP for 12nm
- JEDEC JESD250-compliant GDDR6 support
- x16, x8, and pseudo-channel modes
- Low-frequency RDQS mode support

NPU IP for Embedded AI
- Fully programmable, efficiently executing neural networks, feature extraction, signal processing, audio, and control code
- Scalable performance by design to cover a wide range of use cases, with MAC configurations of up to 64 int8 MACs per cycle (natively 128 4x8 MACs)
- Future proof architecture that supports the most advanced ML data types and operators
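As a back-of-envelope illustration of what "up to 64 int8 MACs per cycle" implies for peak throughput (the 1 GHz clock below is an assumption for illustration, not a vendor figure):

```python
# Rough peak-throughput estimate from a MAC-per-cycle configuration.
# The clock frequency here is an illustrative assumption.
def peak_gops(macs_per_cycle: int, clock_mhz: float) -> float:
    # One MAC = 1 multiply + 1 accumulate = 2 operations.
    ops_per_cycle = macs_per_cycle * 2
    return ops_per_cycle * clock_mhz * 1e6 / 1e9

print(peak_gops(64, 1000.0))   # 64 int8 MACs @ 1 GHz -> 128.0 GOPS
print(peak_gops(128, 1000.0))  # native 128 4x8 MACs  -> 256.0 GOPS
```

Actual sustained throughput depends on utilization, memory bandwidth, and operator mix, so these peak numbers are an upper bound.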

Highly scalable inference NPU IP for next-gen AI applications
- Matrix multiplication: 4096 MACs/cycle (int8), 1024 MACs/cycle (int16)
- Vector processor: RISC-V with RVV 1.0
- Custom instructions for softmax and local storage access

NPU IP family for generative and classic AI with high power efficiency, scalable and future-proof
- Supports a wide range of activation and weight data types, from 32-bit floating point down to 2-bit Binary Neural Networks (BNN)
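To illustrate the two ends of that data-type range, here is a generic sketch of symmetric int8 quantization and BNN-style sign binarization. Both the scale value and the scheme are illustrative assumptions, not the vendor's method:

```python
# Two weight encodings at opposite ends of the supported range:
# symmetric int8 quantization and binary (sign-only) weights.
# Generic illustration only; not the vendor's actual scheme.
def quantize_int8(w: float, scale: float) -> int:
    q = round(w / scale)
    return max(-128, min(127, q))  # clamp to the int8 range

def dequantize_int8(q: int, scale: float) -> float:
    return q * scale

def binarize(w: float) -> int:
    # Binary neural networks keep only the sign of each weight.
    return 1 if w >= 0 else -1

scale = 0.02  # assumed per-tensor scale for illustration
print(quantize_int8(0.5, scale))    # 25
print(dequantize_int8(25, scale))   # ~0.5
print(binarize(-0.3))               # -1
```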

AI Accelerator: 1 TOPS, optimized for neural networks
- Power efficiency of 18 TOPS/W
- Capable of processing real-time HD video and images on-chip
- Advanced activation memory management
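A quick sanity check of the implied power draw, assuming the 1 TOPS and 18 TOPS/W figures refer to the same operating point (the listing does not say):

```python
# Implied power draw from the quoted throughput and efficiency figures.
# Assumes both numbers apply at the same operating point.
tops = 1.0           # quoted throughput
tops_per_watt = 18.0 # quoted efficiency
power_w = tops / tops_per_watt
print(f"{power_w * 1000:.1f} mW")  # 55.6 mW
```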