AI Inference Accelerator IP
16 IP from 12 vendors
-
NPU / AI accelerator with emphasis on LLMs
- Programmable and Model-flexible
- Ecosystem Ready
-
AI accelerator
- Massive Floating Point (FP) Parallelism: To handle extensive computations simultaneously.
- Optimized Memory Bandwidth Utilization: Ensuring peak efficiency in data handling.
-
AI Accelerator Specifically for CNN
- Specialized hardware with controlled throughput and hardware cost/resources, utilizing parameterizable layers, configurable weights, and precision settings to support fixed-point operations.
- This hardware aims to accelerate inference operations, particularly for CNNs such as LeNet-5, VGG-16, VGG-19, AlexNet, ResNet-50, etc.
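The fixed-point support described above can be illustrated with a minimal sketch of symmetric per-tensor INT8 weight quantization. The `quantize_int8` helper and the single-scale scheme are illustrative assumptions; the IP's actual precision settings are configurable and not specified in the listing:

```python
import numpy as np

def quantize_int8(weights):
    # Symmetric per-tensor quantization: map floats into [-127, 127]
    # using one scale factor derived from the largest magnitude.
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Approximate reconstruction of the original FP weights.
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.02, 1.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
```

Convolution and fully connected layers then run entirely on the INT8 operands, with the scale applied once at the output, which is what makes fixed-point operation cheap in hardware.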
-
AI Accelerator: Neural Network-Optimized, 1 TOPS
- Power-efficient: 18 TOPS/W
- Capable of processing real-time HD video and images on-chip
- Advanced activation memory management
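As a back-of-envelope consistency check (derived here, not a vendor figure), the two headline numbers above imply a core power of roughly 56 mW:

```python
# Implied power at the quoted throughput and efficiency.
throughput_tops = 1.0         # tera-operations per second (from the listing)
efficiency_tops_per_w = 18.0  # TOPS per watt (from the listing)

power_w = throughput_tops / efficiency_tops_per_w
print(f"{power_w * 1e3:.1f} mW")  # ≈ 55.6 mW
```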
-
AI Accelerator (NPU) IP - 3.2 GOPS for Audio Applications
- 3.2 GOPS
- Ultra-low power consumption (<300 µW)
- Low latency
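The two figures above imply an efficiency on the order of 10 TOPS/W; since 300 µW is an upper bound on power, this arithmetic gives a lower bound on efficiency:

```python
# Implied efficiency of the audio NPU from the quoted figures:
# 3.2 GOPS at <300 uW. Upper-bound power -> lower-bound efficiency.
gops = 3.2
power_w = 300e-6
efficiency_tops_per_w = (gops * 1e9) / power_w / 1e12
print(f"{efficiency_tops_per_w:.2f} TOPS/W")  # ≈ 10.67 TOPS/W
```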
-
Edge AI Accelerator NNE 1.0
- Minimal effort in system integration
- Speed up AI inference performance
- High performance for power-sensitive applications
-
AI Accelerator
- Independent of external controller
- Accelerates high dimensional tensors
- Highly parallel with multi-tasking or multiple data sources
- Optimized for performance / power / area
-
High-performance, power-efficient deep learning accelerator for edge and end-point inference
- Configurable MACs from 32 to 4096 (INT8)
- Maximum performance 8 TOPS at 1GHz
- Configurable local memory: 16KB to 4MB
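The peak-throughput claim can be sanity-checked: counting each MAC as two operations (one multiply plus one add) per cycle, 4096 INT8 MACs at 1 GHz yield 8.192 TOPS, matching the quoted maximum of 8 TOPS:

```python
def peak_tops(num_macs, freq_hz, ops_per_mac=2):
    # Peak throughput: each MAC contributes a multiply and an add per cycle.
    return num_macs * ops_per_mac * freq_hz / 1e12

print(peak_tops(4096, 1e9))  # 8.192 TOPS at the maximum configuration
print(peak_tops(32, 1e9))    # 0.064 TOPS at the minimum configuration
```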
-
High-Performance Memory Expansion IP for AI Accelerators
- Expand Effective HBM Capacity by up to 50%
- Enhance AI Accelerator Throughput
- Boost Effective HBM Bandwidth
- Integrated address translation and memory management
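To make the capacity claim concrete: the listing does not state how the expansion is achieved, so the sketch below simply applies the quoted "up to 50%" factor to a hypothetical 24 GB HBM stack:

```python
# Hypothetical example only: 24 GB is an assumed HBM capacity, and 1.5x
# is the quoted "up to 50%" expansion applied as a best-case factor.
physical_gb = 24.0
expansion = 1.5
effective_gb = physical_gb * expansion
print(effective_gb)  # 36.0 GB effective capacity
```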
-
Tensilica AI Max - NNA 110 Single Core
- Scalable Design to Adapt to Various AI Workloads
- Efficient in Mapping State-of-the-Art DL/AI Workloads
- End-to-End Software Toolchain for All Markets and Large Number of Frameworks