Deep Learning IP
21 IP cores from 18 vendors (showing 1-10)
High-performance, efficient deep learning accelerator for edge and endpoint inference
- Configurable MACs from 32 to 4096 (INT8)
- Maximum performance of 8 TOPS at 1 GHz
- Configurable local memory: 16 KB to 4 MB
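The "8 TOPS at 1 GHz" figure is consistent with the usual rule of thumb for MAC-array accelerators: peak ops = MACs x clock x 2 (each MAC performs one multiply and one add per cycle). A minimal sketch of that arithmetic, with a helper name of my own (not from any vendor datasheet):

```python
def peak_tops(macs: int, clock_hz: float, ops_per_mac: int = 2) -> float:
    """Theoretical peak throughput in TOPS for a MAC array.

    Assumes each MAC unit contributes ops_per_mac operations per cycle
    (conventionally 2: one multiply plus one accumulate).
    """
    return macs * clock_hz * ops_per_mac / 1e12

# 4096 INT8 MACs at 1 GHz -> 8.192 TOPS, matching the quoted "8 TOPS at 1 GHz"
print(peak_tops(4096, 1e9))  # 8.192
```

Real designs rarely sustain this peak; utilization depends on layer shapes and memory bandwidth.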
Unified Deep Learning Processor
- Unified deep learning/vision/video architecture enables flexibility
- Low power extends battery life and prevents overheating
- Single scalable architecture
Deep Learning Accelerator
- NEUCHIPS Asymmetric Quantization
- NEUCHIPS Advanced Symmetric Quantization
- Patented memory architecture
- Integrated Interface: PCIe Gen3/4/5
High Performance Scalable Sensor Hub DSP Architecture
- Self-contained, specialized on-device sensor hub processor
- Unifies multi-sensor processing with AI and sensor fusion
- Highly configurable 8-way VLIW architecture
Network Security Crypto Accelerator
- Scalable architecture & crypto engines for optimal performance/resource usage
- Configurable for perfect application fit
- 100% CPU offload with low latency and high throughput
AI accelerator (NPU) IP - 32 to 128 TOPS
- Power-efficient: 18 TOPS/W
- 36K-56K MACs
- Multi-job support
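The listing quotes a TOPS range and a MAC-count range but no clock. Purely as a back-of-envelope inference (mine, not a vendor figure), the clock implied by a peak rating follows from the same MACs x clock x 2 relation:

```python
def implied_clock_ghz(tops: float, macs: int, ops_per_mac: int = 2) -> float:
    """Clock (GHz) implied by a peak-TOPS rating and a MAC count.

    Assumes ops_per_mac operations per MAC per cycle (conventionally 2).
    This is an inference from the quoted specs, not a published figure.
    """
    return tops * 1e12 / (macs * ops_per_mac) / 1e9

# 128 TOPS with 56K MACs would imply a clock on the order of ~1.1 GHz
print(round(implied_clock_ghz(128, 56_000), 2))  # 1.14
```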
Highly scalable inference NPU IP for next-gen AI applications
- Matrix multiplication: 4096 MACs/cycle (INT8), 1024 MACs/cycle (INT16)
- Vector processor: RISC-V with RVV 1.0
- Custom instructions for softmax and local storage access
Low-power, high-speed reconfigurable processor to accelerate AI everywhere
- Core count: 4
- Performance (INT8, 600 MHz): 0.6 TOPS
- Achievable clock speed: 600 MHz (28nm)
- Synthesis logic gates: 2 MGates
Edge AI Accelerator NNE 1.0
- Minimal effort in system integration
- Speeds up AI inference performance
- Superior performance for power-sensitive applications
Fusion Recurrent Neural Network (RNN) Accelerator
- MAC utilization up to 99%
- Energy efficiency 2.06 TOPS/W
- Peak performance can scale up to 204.8 GOPS
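The quoted efficiency and peak throughput together imply a power figure: power = throughput / efficiency. A small sketch of that arithmetic (the helper name is mine, not the vendor's):

```python
def power_watts(gops: float, tops_per_watt: float) -> float:
    """Power draw implied by a throughput (GOPS) and an efficiency (TOPS/W).

    Converts GOPS to TOPS, then divides TOPS by TOPS/W to get watts.
    """
    return (gops / 1000.0) / tops_per_watt

# 204.8 GOPS at 2.06 TOPS/W -> roughly 0.1 W at peak
print(round(power_watts(204.8, 2.06), 4))  # 0.0994
```

This is an implied figure at peak; actual power depends on workload and how close the accelerator runs to its quoted 99% MAC utilization.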