Neural Network IP
110 IP from 38 vendors
- Compact neural network engine offering scalable performance (32, 64, or 128 MACs) at a very low energy footprint
- Best-in-class energy efficiency
- Enables compelling use cases and advanced concurrency
- Scalable IP for various workloads
- Fusion Recurrent Neural Network (RNN) Accelerator
- MAC utilization up to 99%
- Energy efficiency of 2.06 TOPS/W
- Peak performance scalable up to 204.8 GOPS
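The peak-throughput figure above can be sanity-checked with back-of-the-envelope arithmetic. The MAC count and clock below are assumptions chosen so the numbers line up (a 128-MAC configuration at 800 MHz), not vendor datasheet values:

```python
# Sanity check of the 204.8 GOPS / 2.06 TOPS/W figures above.
# ASSUMPTIONS (not from the datasheet): 128 MAC units, each performing
# one multiply-accumulate per cycle counted as 2 ops, at an 800 MHz clock.
MACS = 128
OPS_PER_MAC = 2          # multiply + accumulate
CLOCK_HZ = 800e6         # assumed clock frequency

peak_gops = MACS * OPS_PER_MAC * CLOCK_HZ / 1e9
watts_at_peak = peak_gops / (2.06 * 1000)  # 2.06 TOPS/W = 2060 GOPS/W

print(peak_gops)                 # 204.8
print(round(watts_at_peak, 3))   # ≈ 0.099 W at peak, under these assumptions
```

Under these assumed parameters, full-rate operation would draw roughly 100 mW, which is consistent with the "high energy efficiency" positioning of this class of accelerator.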
- Convolutional Neural Network (CNN) Compact Accelerator
- Supports convolution, max-pooling, batch-normalization, and fully connected layers
- Configurable weight bit width (16-bit or 1-bit)
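The four layer types listed above can be sketched in pure Python to show how they compose into a small forward pass. All shapes, weights, and the single-channel simplification are illustrative only, not the accelerator's actual configuration:

```python
import math
import random

# Illustrative sketch of the four layer types: convolution, max pooling,
# batch normalization, and a fully connected layer. Single channel,
# no strides or padding, for clarity.

def conv2d(x, w):
    """Valid 2-D convolution (cross-correlation), single channel."""
    kh, kw = len(w), len(w[0])
    oh, ow = len(x) - kh + 1, len(x[0]) - kw + 1
    return [[sum(x[i + a][j + b] * w[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(ow)] for i in range(oh)]

def max_pool(x, size=2):
    """Non-overlapping max pooling over size x size windows."""
    oh, ow = len(x) // size, len(x[0]) // size
    return [[max(x[i * size + a][j * size + b]
                 for a in range(size) for b in range(size))
             for j in range(ow)] for i in range(oh)]

def batch_norm(x, eps=1e-5):
    """Normalize to zero mean / unit variance over the whole feature map."""
    flat = [v for row in x for v in row]
    mean = sum(flat) / len(flat)
    var = sum((v - mean) ** 2 for v in flat) / len(flat)
    scale = 1.0 / math.sqrt(var + eps)
    return [[(v - mean) * scale for v in row] for row in x]

def fully_connected(x, w, b):
    """w holds one weight vector per output neuron."""
    flat = [v for row in x for v in row]
    return [sum(f * wi for f, wi in zip(flat, weights)) + bi
            for weights, bi in zip(w, b)]

random.seed(0)
img = [[random.gauss(0, 1) for _ in range(8)] for _ in range(8)]
kernel = [[random.gauss(0, 1) for _ in range(3)] for _ in range(3)]
feat = batch_norm(max_pool(conv2d(img, kernel)))   # 8x8 -> 6x6 -> 3x3
w_fc = [[random.gauss(0, 1) for _ in range(9)] for _ in range(4)]
logits = fully_connected(feat, w_fc, [0.0] * 4)
print(len(logits))  # 4
```

A hardware accelerator pipelines these same stages over quantized data (hence the 16-bit or 1-bit weight option above) rather than computing them in floating point.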
- ARC NPX Neural Processing Unit (NPU) IP supports the latest, most complex neural network models and addresses demands for real-time compute with ultra-low power consumption for AI applications
- ARC processor cores are optimized to deliver the best performance/power/area (PPA) efficiency in the industry for embedded SoCs. Designed from the start for power-sensitive embedded applications, ARC processors implement a Harvard architecture for higher performance through simultaneous instruction and data memory access, and a high-speed scalar pipeline for maximum power efficiency. The 32-bit RISC engine offers a mixed 16-bit/32-bit instruction set for greater code density in embedded systems.
- ARC's high degree of configurability and instruction set architecture (ISA) extensibility contribute to its best-in-class PPA efficiency. Designers can add or omit hardware features to optimize the core's PPA for their target application, with no wasted gates. ARC users can also add their own custom instructions and hardware accelerators to the core, as well as tightly couple memory and peripherals, enabling dramatic improvements in performance and power efficiency at both the processor and system levels.
- Complete, proven commercial and open-source tool chains, optimized for ARC processors, give SoC designers the development environment they need to efficiently develop ARC-based systems that meet all of their PPA targets.
- Neural Network Processor IP
- TOPS (INT8) @ 1 GHz: 3–4.5
- GFLOPS (32-bit) @ 1 GHz: 64
- GFLOPS (16-bit) @ 1 GHz: 256
- GOPS (32-bit) @ 1 GHz: 64
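Throughput quoted "per 1 GHz", as in the spec lines above, can be read directly as operations per cycle. Assuming each multiply-accumulate counts as two operations (a common convention, but an assumption here, not a vendor figure), the implied MAC count per cycle falls out with one division:

```python
# Converting "TOPS @ 1 GHz" into an implied per-cycle MAC count.
# ASSUMPTION: 2 ops per multiply-accumulate; vendors do not always
# state which convention they use.
def macs_per_cycle(tops, clock_ghz=1.0, ops_per_mac=2):
    ops_per_cycle = tops * 1e12 / (clock_ghz * 1e9)
    return ops_per_cycle / ops_per_mac

print(macs_per_cycle(3.0))   # 1500.0 implied INT8 MACs/cycle
print(macs_per_cycle(0.19))  # ≈ 95 implied INT8 MACs/cycle
```

The same conversion applied to the 0.19 TOPS entry later in this listing suggests a much smaller array, illustrating the wide range of scales these NPU offerings cover.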
- Neural network processor designed for edge devices
- High energy efficiency
- Supports mainstream deep learning frameworks
- Low power consumption
- An integrated AI solution
- DPU for Convolutional Neural Network
- Configurable hardware architecture
- Configurable core count (up to three)
- Convolution and deconvolution
- PowerVR Neural Network Accelerator
- Flexible bit-depth data type support
- Lossless weight compression
- Advanced security enablement
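The listing does not describe how PowerVR's lossless weight compression works, but one widely used lossless idea in NPU designs is run-length encoding of the zeros that dominate pruned weight tensors. The sketch below is a generic illustration of that idea only, not Imagination's actual scheme:

```python
# Generic zero run-length encoding (ZRLE) of a flat weight list:
# each nonzero weight is stored with the count of zeros preceding it.
# Purely illustrative; NOT the PowerVR compression format.
def zrle_encode(weights):
    out, run = [], 0
    for w in weights:
        if w == 0:
            run += 1
        else:
            out.append((run, w))  # (preceding zeros, value)
            run = 0
    if run:
        out.append((run, None))   # trailing zeros, no value
    return out

def zrle_decode(pairs):
    out = []
    for run, w in pairs:
        out.extend([0] * run)
        if w is not None:
            out.append(w)
    return out

w = [0, 0, 3, 0, 5, 0, 0, 0]
enc = zrle_encode(w)
print(enc)                        # [(2, 3), (1, 5), (3, None)]
assert zrle_decode(enc) == w      # round-trips losslessly
```

Because decoding reproduces the weights exactly, schemes in this family trade no accuracy for the bandwidth savings, which is what "lossless" promises in the bullet above.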
- Power-efficient, high-performance neural network hardware IP for automotive embedded solutions
- Neural Network Processor IP
- TOPS (INT8) @ 1 GHz: 0.19