AI Inference Processor IP
21 IP from 13 vendors (1-10)
- AI inference processor IP
  - High-performance, low-power, small-footprint IP for deep-learning inference processing.
- Highly Scalable and Efficient Second-Generation ML Inference Processor
  - Increased Performance
  - Improved Efficiency
  - Extended Configurability
- High-Efficiency, Low-Area ML Inference Processor
  - High Efficiency
  - Lowest Area
  - Optimized Design
  - Futureproof
- ML Inference Processor with Balanced Efficiency and Performance
  - Balanced Performance
  - Optimized Design
  - High Efficiency
- Low-power, high-speed reconfigurable processor to accelerate AI everywhere
  - Number of cores: 4
  - Performance (INT8, 600 MHz): 0.6 TOPS
  - Achievable clock speed: 600 MHz (28 nm)
  - Synthesized logic gates: 2 MGates
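As a quick sanity check of the quoted figures, the throughput, clock, and core count above imply a per-core datapath width. This is illustrative arithmetic only; the vendor's actual datapath organization is not stated in the listing.

```python
# Back-of-the-envelope check of the quoted specs (illustrative only;
# the real microarchitecture is not described in the listing).
def ops_per_cycle_per_core(tops: float, clock_mhz: float, cores: int) -> float:
    """INT8 operations each core must retire per clock cycle."""
    ops_per_second = tops * 1e12
    cycles_per_second = clock_mhz * 1e6
    return ops_per_second / cycles_per_second / cores

# 0.6 TOPS at 600 MHz across 4 cores -> 250 INT8 ops per cycle per core
print(ops_per_cycle_per_core(0.6, 600, 4))
```

So each of the four cores would need to sustain roughly 250 INT8 operations per cycle, consistent with a modest MAC-array design at this gate count.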
- AI Accelerator
  - Independent of external controller
  - Accelerates high-dimensional tensors
  - Highly parallel with multi-tasking or multiple data sources
  - Optimized for performance / power / area
- Ultra-low-power inference engine
  - Neuromorphic processor
  - Sub-milliwatt power
  - Ultra-low-power AI processing
- Highly scalable inference NPU IP for next-gen AI applications
  - ENLIGHT Pro is engineered for flexibility, scalability, and configurability, delivering high efficiency in a compact footprint.
  - ENLIGHT Pro supports the transformer architecture, a key requirement of modern AI applications, particularly Large Language Models (LLMs) used for tasks such as text recognition and generation.
- Super low-power, high-accuracy AI processing engine for wake word, voice commands, acoustic event detection, speaker ID, and sensors
  - Voice control and context detection
  - High accuracy in noisy conditions
- AI Accelerator: neural-network-optimized, 1 TOPS
  - Power efficiency: 18 TOPS/W
  - Capable of processing real-time HD video and images on-chip
  - Advanced activation memory management
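The quoted throughput and efficiency figures together imply a power envelope. The sketch below is illustrative arithmetic only; real power draw depends on workload, utilization, and process node, none of which the listing specifies.

```python
# Implied power draw from the quoted throughput (TOPS) and
# efficiency (TOPS/W) figures. Illustrative arithmetic only.
def implied_power_watts(tops: float, tops_per_watt: float) -> float:
    """Power (W) needed to sustain `tops` at the stated efficiency."""
    return tops / tops_per_watt

# 1 TOPS at 18 TOPS/W -> roughly 0.056 W (~56 mW)
print(round(implied_power_watts(1.0, 18.0), 3))
```

That is, sustaining the full 1 TOPS at the stated 18 TOPS/W works out to roughly 56 mW.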