AI Inference Processor IP
18 IP from 13 vendors (1-10)
- **AI inference processor IP**
  - High-performance, low-power, small-footprint IP for deep learning inference processing
- **AI Accelerator: Neural Network-Specific Optimized 1 TOPS**
  - Power efficiency of 18 TOPS/W
  - Capable of processing real-time HD video and images on-chip
  - Advanced activation memory management
- **IP cores for ultra-low power AI-enabled devices**
  - Ultra-fast Response Time
  - Zero-latency Switching
  - Low Power
- **High-Performance NPU**
  - Low Power Consumption
  - High Performance
  - Flexibility and Configurability
  - High-Precision Inference
- **Low-power, high-speed reconfigurable processor to accelerate AI everywhere**
  - Core Count: 4
  - Performance (INT8, 600 MHz): 0.6 TOPS
  - Achievable Clock Speed: 600 MHz (28 nm)
  - Synthesis Logic Gates: 2 MGates
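The throughput figures in listings like the one above follow from MAC count and clock speed. As a hedged sketch (assuming the common convention of 2 operations, multiply plus accumulate, per MAC per cycle; the vendor's actual counting method is not stated here), the relationship can be checked as:

```python
# Sketch: relating a listed peak-TOPS figure to MACs/cycle and clock speed.
# Assumption: 2 ops (multiply + accumulate) per MAC per cycle, the usual
# convention for marketing TOPS numbers; vendors may count differently.

def peak_tops(macs_per_cycle: int, clock_hz: float, ops_per_mac: int = 2) -> float:
    """Peak throughput in TOPS = ops/cycle * cycles/second / 1e12."""
    return macs_per_cycle * ops_per_mac * clock_hz / 1e12

def macs_for_tops(tops: float, clock_hz: float, ops_per_mac: int = 2) -> float:
    """Invert the relation: MACs/cycle implied by a given TOPS figure."""
    return tops * 1e12 / (ops_per_mac * clock_hz)

# The 4-core entry above lists 0.6 TOPS (INT8) at 600 MHz:
implied_macs = macs_for_tops(0.6, 600e6)
print(implied_macs)  # 500.0 MACs/cycle across the array, i.e. 125 per core
```

The same arithmetic applies to the efficiency claims elsewhere in the listing: TOPS/W is simply this peak-TOPS figure divided by power draw, so it is only comparable between vendors when both count operations the same way.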
- **Machine Learning Processor**
  - Partner Configurable
  - Extremely Small Area
  - Single Toolchain
- **Prodigy IoT/Edge Licensable Hardware IP**
  - TPU AI/ML Inference IP Architecture
- **AI Inference IP: ultra-low power, tiny area, standard CMOS; supports RNNs of ~100K parameters**
  - Ultra-Low Power
  - Standard CMOS
  - Small Area
- **Highly Scalable and Efficient Second-Generation ML Inference Processor**
  - Increased Performance
  - Improved Efficiency
  - Extended Configurability
- **High-Efficiency, Low-Area ML Inference Processor**
  - High Efficiency
  - Lowest Area
  - Optimized Design
  - Future-proof