Convolutional Neural Network (CNN) Compact Accelerator

Overview

Take advantage of the FPGA's parallel processing power to implement CNNs. This IP enables you to implement your own custom network or use many of the commonly used networks published by others.

Our IP provides the flexibility to adjust the number of acceleration engines. By tuning the number of engines and the allocated memory, users can trade operating speed against FPGA capacity to obtain the best match for their application.

The CNN Accelerator IP is paired with the Lattice Neural Network Compiler Tool. The compiler takes networks developed in Caffe or TensorFlow, analyzes them for resource usage, simulates them for performance and functionality, and then compiles them for the CNN Accelerator IP.
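The engines-versus-capacity tradeoff described above can be illustrated with a back-of-envelope model. This is purely a sketch: the function name, MAC counts, per-engine rate, and clock frequency below are assumptions for illustration, not figures from the IP's documentation.

```python
# Hypothetical model (all numbers are illustrative assumptions): adding
# convolution engines raises throughput roughly linearly, at the cost of
# more FPGA resources.
def estimate_fps(macs_per_inference, num_engines, macs_per_engine_per_cycle, clock_hz):
    # Cycles needed if work divides evenly across engines (ignores memory stalls).
    cycles = macs_per_inference / (num_engines * macs_per_engine_per_cycle)
    return clock_hz / cycles  # idealized inferences per second

for engines in (1, 2, 4, 8):
    fps = estimate_fps(macs_per_inference=50e6, num_engines=engines,
                       macs_per_engine_per_cycle=8, clock_hz=100e6)
    print(engines, "engines ->", fps, "fps (ideal)")
```

In practice the real curve flattens as memory bandwidth becomes the bottleneck, which is why the IP also makes the number of memory blocks configurable.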
Key Features
- Supports convolution, max pooling, batch normalization, and fully connected layers
- Configurable weight bit width (16-bit or 1-bit)
- Configurable activation bit width (16/8-bit or 1-bit)
- Dynamically supports 16-bit and 8-bit activation widths
- Configurable number of memory blocks to trade off resources against performance
- Configurable number of convolution engines to trade off resources against performance
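The reduced bit widths listed above can be pictured with a small sketch. The fixed-point format below (signed values with a chosen number of fraction bits, sign-only 1-bit weights) is a common scheme used here for illustration; it is an assumption, not the IP's documented quantization method.

```python
import numpy as np

def quantize(x, bits, frac_bits):
    """Quantize floats to signed fixed-point and return the value actually
    represented. Illustrative only -- not the IP's exact scheme."""
    scale = 2 ** frac_bits
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    q = np.clip(np.round(x * scale), lo, hi)  # integer codes
    return q / scale

def binarize(w):
    """1-bit weights: keep only the sign (+1 / -1)."""
    return np.where(w >= 0, 1.0, -1.0)

weights = np.array([0.73, -0.12, 0.05, -0.88])
print(quantize(weights, bits=16, frac_bits=12))  # 16-bit: small error
print(quantize(weights, bits=8, frac_bits=6))    # 8-bit: coarser steps
print(binarize(weights))                         # 1-bit: sign only
```

Narrower widths shrink the multipliers and memory the design consumes, which is the same resource/accuracy tradeoff the configurable engine and memory counts expose at the architecture level.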
Block Diagram
Technical Specifications
Related IPs
- Accelerator for Convolutional Neural Networks
- PowerVR Neural Network Accelerator
- DPU for Convolutional Neural Network