Edge AI Processor IP
25 IP from 18 vendors
Complete Neural Processor for Edge AI
- Designed for Low-Power Neural Network Processing
- Flexible Training Methods
- Scalable Neuron Fabric
AI Processor Accelerator
- Universal Compatibility: Supports any framework, neural network, and backbone.
- Large Input Frame Handling: Accommodates large input frames without downsizing.
High-Performance Edge AI Accelerator
- Performance: Up to 16 TOPS
- MACs (8x8): 4K, 8K
- Data Types: 1-bit, INT8, INT16
Highly scalable performance for classic and generative on-device and edge AI solutions
- Flexible System Integration
- Scalable Design and Configurability
- Efficient in Mapping State-of-the-Art AI/ML Workloads
High-performance 32-bit multi-core processor with AI acceleration engine
- Instruction set: T-Head ISA (32-bit/16-bit variable-length instruction set)
- Multi-core: Homogeneous multi-core with 1 to 4 configurable cores
- Pipeline: 12-stage
- Microarchitecture: Triple-issue, deep out-of-order
AI inference processor IP
- High-performance, low-power, small-footprint IP for deep-learning inference processing.
Performance AI Accelerator for Edge Computing
- Performance: Up to 16 TOPS
- MACs (8x8): 4K, 8K
- Data Types: 1-bit, INT8, INT16
- Internal SRAM: Up to 16 MB
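The peak-throughput figures in listings like this follow directly from the MAC count: each multiply-accumulate unit performs two operations (a multiply and an add) per cycle, so throughput in TOPS is MACs × 2 × clock rate. As a rough sanity check (the clock rate below is an assumed value, not stated in the listing):

```python
def peak_tops(mac_count: int, clock_hz: float) -> float:
    """Peak throughput in TOPS: each MAC does 2 ops (multiply + add) per cycle."""
    return mac_count * 2 * clock_hz / 1e12

# An 8K (8192) MAC array at an assumed 1 GHz clock gives roughly 16 TOPS,
# consistent with the "Up to 16 TOPs" figure quoted above.
print(peak_tops(8192, 1.0e9))  # → 16.384
```

Note this is a peak figure; sustained throughput depends on utilization, memory bandwidth, and how well a given network maps onto the MAC array.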
Neural network processor designed for edge devices
- High energy efficiency
- Supports mainstream deep-learning frameworks
- Low power consumption
- An integrated AI solution
Performance Efficiency Leading AI Accelerator for Mobile and Edge Devices
- Performance: Up to 4 TOPS
- MACs (8x8): 512, 1K, 2K
- Data Types: 1-bit, INT8, INT16
Performance Efficiency AI Accelerator for Mobile and Edge Devices
- Performance: Up to 4 TOPS
- MACs (8x8): 512, 1K, 2K
- Data Types: 1-bit, INT8, INT16