Neural Network Processor IP
Overview
VIP9000 Series supports all popular deep learning frameworks (TensorFlow, PyTorch, TensorFlow Lite, Caffe, Caffe2, DarkNet, ONNX, NNEF, Keras, etc.) as well as programming APIs such as OpenCL and OpenVX. Neural network optimization techniques such as quantization, pruning, and model compression are also supported natively by the VIP9000 architecture. AI applications can easily be ported to VIP9000 platforms through offline conversion with the Vivante ACUITY™ SDK, or through run-time interpretation with the Android NN API or Arm NN.
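The quantization mentioned above typically maps floating-point tensors to 8-bit integers via a scale and zero-point. A minimal asymmetric-quantization sketch in Python for illustration only; the function names are hypothetical and not part of the ACUITY SDK or any VIP9000 API:

```python
import numpy as np

def quantize_int8(x, qmin=-128, qmax=127):
    """Asymmetric per-tensor quantization of a float array to INT8."""
    lo, hi = float(x.min()), float(x.max())
    lo, hi = min(lo, 0.0), max(hi, 0.0)   # representable range must include zero
    scale = (hi - lo) / (qmax - qmin) or 1.0
    zero_point = int(round(qmin - lo / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover an approximation of the original float values."""
    return (q.astype(np.float32) - zero_point) * scale

x = np.array([-1.0, 0.0, 0.5, 2.0], dtype=np.float32)
q, scale, zero_point = quantize_int8(x)
x_hat = dequantize(q, scale, zero_point)  # matches x to within one quantization step
```

Inference engines that execute INT8 models apply the inverse mapping (or fold it into the next layer's scale), which is why the quantized model trades a small, bounded accuracy loss for the much higher 8-bit throughput listed below.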
Key Features
- TOPS (INT8) @1GHz: 3–4.5
- GFLOPS (32-bit) @1GHz: 64
- GFLOPS (16-bit) @1GHz: 256
- GOPS (32-bit) @1GHz: 64
- GOPS (16-bit) @1GHz: 256
- GOPS (8-bit) @1GHz: 512
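Peak figures like these conventionally count one multiply-accumulate (MAC) as two operations, so peak GOPS = MAC units × 2 × clock (GHz). A quick sanity check of the 8-bit figure under that convention; the MAC-unit counts here are inferred from the published numbers, not taken from the datasheet:

```python
def peak_gops(mac_units, freq_ghz, ops_per_mac=2):
    """Peak throughput in GOPS, counting each MAC as two ops (multiply + add)."""
    return mac_units * ops_per_mac * freq_ghz

# 512 INT8 GOPS @ 1 GHz implies 256 INT8 MAC units under this convention.
int8_gops = peak_gops(256, 1.0)

# The 16-bit path delivers half the 8-bit rate: 256 GOPS @ 1 GHz.
fp16_gops = peak_gops(128, 1.0)
```

The same linear scaling explains why vendors quote throughput normalized to 1 GHz: doubling the clock doubles the peak rate, so figures at other frequencies follow directly.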
Technical Specifications
- Foundry, Node: All
- Maturity: Silicon Integration
- Availability: Now
Related IPs
- Neural network processor designed for edge devices
- AI inference processor IP
- Power efficient, high-performance neural network hardware IP for automotive embedded solutions
- High speed NoC (Network On-Chip) Interconnect IP
- Convolutional Neural Network (CNN) Compact Accelerator
- PowerVR Neural Network Accelerator