Neural Accelerator AI Processor IP
23 IP from 7 vendors (showing 1-10)

AI Processor Accelerator
- Universal Compatibility: Supports any framework, neural network, and backbone.
- Large Input Frame Handling: Accommodates large input frames without downsizing.

AI Accelerator
- Operates independently of an external controller
- Accelerates high-dimensional tensors
- Highly parallel, supporting multi-tasking and multiple data sources
- Optimized for performance, power, and area

High-Performance Edge AI Accelerator
- Performance: Up to 16 TOPS
- MACs (8x8): 4K, 8K
- Data Types: 1-bit, INT8, INT16
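As a rough sanity check on how the MAC counts above relate to the quoted TOPS figures: peak throughput is conventionally counted as 2 ops (multiply + accumulate) per MAC per cycle. The 1 GHz clock below is an illustrative assumption, not a vendor specification.

```python
def peak_tops(num_macs: int, clock_hz: float) -> float:
    """Theoretical peak throughput: each MAC performs 2 ops (mul + add) per cycle."""
    return num_macs * 2 * clock_hz / 1e12

# 8K MACs at an assumed 1 GHz clock -> ~16.4 TOPS, in line with the "up to 16 TOPS" figure
print(peak_tops(8 * 1024, 1e9))
# 256 MACs at the same assumed clock -> 0.512 TOPS, i.e. the 512 GOPS entry-level figure
print(peak_tops(256, 1e9))
```

Note that these are peak numbers; sustained throughput depends on compute utilization, which varies with the network and memory system.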

Performance Efficiency Leading AI Accelerator for Mobile and Edge Devices
- Performance: Up to 4 TOPS
- MACs (8x8): 512, 1K, 2K
- Data Types: 1-bit, INT8, INT16

Deeply Embedded AI Accelerator for Microcontrollers and End-Point IoT Devices
- Performance: Up to 1 TOPS
- MACs (8x8): 64, 128, 256, 512
- Data Types: 1-bit, INT8, INT16

Performance Efficiency AI Accelerator for Mobile and Edge Devices
- Performance: Up to 4 TOPS
- MACs (8x8): 512, 1K, 2K
- Data Types: 1-bit, INT8, INT16

Performance AI Accelerator for Edge Computing
- Performance: Up to 16 TOPS
- MACs (8x8): 4K, 8K
- Data Types: 1-bit, INT8, INT16
- Internal SRAM: Up to 16 MB

Lowest Cost and Power AI Accelerator for End-Point Devices
- Performance: Up to 512 GOPS
- MACs (8x8): 64, 128, 256
- Data Types: 1-bit, INT8, INT16

Run-time Reconfigurable Neural Network IP
- Customizable IP Implementation: Achieve desired performance (TOPS), size, and power for target implementation and process technology
- Optimized for Generative AI: Supports popular Generative AI models including LLMs and LVMs
- Efficient AI Compute: Achieves very high AI compute utilization, resulting in exceptional energy efficiency
- Real-Time Data Streaming: Optimized for low-latency operations with batch=1
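The batch=1 point above trades batching throughput for per-frame latency, which is why the compute-utilization claim matters: at batch=1, latency is roughly the per-inference workload divided by the sustained (not peak) throughput. A minimal sketch with illustrative numbers; none of these values are vendor figures.

```python
def inference_latency_ms(model_gops: float, peak_tops: float, utilization: float) -> float:
    """Approximate per-frame latency at batch=1.

    model_gops:  operations per inference, in GOPs (hypothetical workload)
    peak_tops:   peak throughput of the accelerator, in TOPS
    utilization: fraction of peak compute actually sustained (0..1)
    """
    effective_ops_per_s = peak_tops * 1e12 * utilization
    return model_gops * 1e9 / effective_ops_per_s * 1e3

# Hypothetical 8 GOP-per-frame network on a 16 TOPS core sustaining 80% utilization
print(inference_latency_ms(8, 16, 0.8))  # 0.625 ms per frame
```

The same model shows why low utilization hurts streaming workloads: halving utilization doubles the per-frame latency, with no batching available to hide it.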

Neural Engine IP - AI Inference for the Highest-Performing Systems
- The Origin E8 is a family of NPU IP inference cores designed for the most performance-intensive applications, including automotive and data centers.
- With its ability to run multiple networks concurrently with zero penalty context switching, the E8 excels when high performance, low latency, and efficient processor utilization are required.
- Unlike IP cores that rely on tiling to scale performance, with the associated power, memory-sharing, and area penalties, the E8 offers single-core performance of up to 128 TOPS, meeting the computational demands of the most advanced LLM and ADAS implementations.