AI Accelerator IP
66 IP from 33 vendors (1 - 10)
AI Accelerator
- Massive Floating Point (FP) Parallelism: To handle extensive computations simultaneously.
- Optimized Memory Bandwidth Utilization: Ensuring peak efficiency in data handling.
-
AI Accelerator Specifically for CNN
- Specialized hardware with controlled throughput and hardware cost/resources, using parameterizable layers, configurable weights, and precision settings to support fixed-point operations.
- This hardware aims to accelerate inference operations, particularly for CNNs such as LeNet-5, VGG-16, VGG-19, AlexNet, ResNet-50, etc.
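The fixed-point path such a core implies can be sketched as quantizing floating-point weights and activations to INT8 and accumulating in a wider integer register before dequantizing. The scales and bit widths below are illustrative assumptions, not the vendor's actual scheme:

```python
import numpy as np

def quantize_int8(x: np.ndarray, scale: float) -> np.ndarray:
    """Map float values to INT8 using a per-tensor scale (assumed scheme)."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

def mac_int8(weights: np.ndarray, activations: np.ndarray,
             w_scale: float, a_scale: float) -> float:
    """One dot product (a MAC chain) in INT8 with INT32 accumulation,
    dequantized back to float at the end."""
    acc = np.sum(weights.astype(np.int32) * activations.astype(np.int32))
    return float(acc) * w_scale * a_scale

w = quantize_int8(np.array([0.5, -0.25, 0.75]), scale=0.01)
a = quantize_int8(np.array([1.0, 2.0, -1.0]), scale=0.02)
print(mac_int8(w, a, 0.01, 0.02))  # -0.75, matching 0.5*1 - 0.25*2 + 0.75*(-1)
```

Accumulating in INT32 rather than INT8 is what keeps long convolution dot products from overflowing, which is why hardware MAC arrays pair narrow multipliers with wide accumulators.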
-
Low power AI accelerator
- Complete speech processing at less than 100 W
- Able to run time-series networks for signal and speech
- 10X more efficient than traditional NNs
-
High-Performance Edge AI Accelerator
- Performance: Up to 16 TOPS
- MACs (8x8): 4K, 8K
- Data Types: 1-bit, INT8, INT16
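The MAC count and the headline TOPS figure are related by a simple rule of thumb: each MAC contributes two operations (one multiply, one accumulate) per cycle. The 1 GHz clock below is an assumption for illustration, not a vendor specification:

```python
def peak_tops(num_macs: int, clock_ghz: float, ops_per_mac: int = 2) -> float:
    """Peak throughput in TOPS: MACs x ops per MAC x clock (GHz) / 1000."""
    return num_macs * ops_per_mac * clock_ghz / 1000.0

# 8K (8192) 8x8 MACs at an assumed 1 GHz clock:
print(peak_tops(8192, 1.0))  # 16.384, consistent with the 16 TOPS headline
```

Real sustained throughput is lower, since memory bandwidth and layer shapes rarely keep every MAC busy every cycle.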
-
Performance Efficiency Leading AI Accelerator for Mobile and Edge Devices
- Performance: Up to 4 TOPS
- MACs (8x8): 512, 1K, 2K
- Data Types: 1-bit, INT8, INT16
-
Deeply Embedded AI Accelerator for Microcontrollers and End-Point IoT Devices
- Performance: Up to 1 TOPS
- MACs (8x8): 64, 128, 256, 512
- Data Types: 1-bit, INT8, INT16
-
Performance Efficiency AI Accelerator for Mobile and Edge Devices
- Performance: Up to 4 TOPS
- MACs (8x8): 512, 1K, 2K
- Data Types: 1-bit, INT8, INT16
-
Performance AI Accelerator for Edge Computing
- Performance: Up to 16 TOPS
- MACs (8x8): 4K, 8K
- Data Types: 1-bit, INT8, INT16
- Internal SRAM: Up to 16 MB
-
Lowest Cost and Power AI Accelerator for End Point Devices
- Performance: Up to 512 GOPS
- MACs (8x8): 64, 128, 256
- Data Types: 1-bit, INT8, INT16
-
AI Processor Accelerator
- Universal Compatibility: Supports any framework, neural network, and backbone.
- Large Input Frame Handling: Accommodates large input frames without downsizing.
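Processing large frames without downsizing typically means tiling the input and running inference tile by tile. A minimal sketch, assuming a hypothetical 224-pixel tile size with no overlap (real pipelines usually overlap tiles to preserve context at the seams):

```python
def tile_frame(height: int, width: int, tile: int = 224):
    """Yield (row, col, h, w) windows covering a frame at full resolution."""
    for r in range(0, height, tile):
        for c in range(0, width, tile):
            yield r, c, min(tile, height - r), min(tile, width - c)

# A 4K frame (2160 x 3840) splits into 10 x 18 = 180 full-resolution tiles:
tiles = list(tile_frame(2160, 3840))
print(len(tiles))  # 180
```

Tiling trades a single resized pass for many full-resolution passes, keeping small objects detectable at the cost of extra compute per frame.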