NPU IP for Wearable and IoT Market

Overview

The VIP9000Pico family offers low-power, programmable, scalable, and extendable solutions for markets that demand low-power AI devices. The VIP9000Pico Series' patented Neural Network engine and Tensor Processing Fabric deliver superb neural network inference performance with industry-leading power efficiency (TOPS/W) and area efficiency (TOPS/mm²). The VIP9000Pico's scalable architecture enables AI for the wearable and IoT markets. In addition to neural network acceleration, the VIP9000Pico Series is optionally equipped with Parallel Processing Units (PPUs), which provide full programmability along with conformance to OpenCL 3.0 and OpenVX 1.2.

VIP9000Pico Series IP supports all popular deep learning frameworks (TensorFlow, TensorFlow Lite, PyTorch, Caffe, DarkNet, ONNX, Keras, etc.) and natively accelerates neural network models through optimization techniques such as quantization, pruning, and model compression. AI applications can be easily ported to VIP9000Pico platforms through offline conversion with Vivante's ACUITY™ Tools SDK or through run-time interpretation with Android NN (NNAPI Delegate), Arm NN, or ONNX Runtime.
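To make the quantization technique mentioned above concrete, the following is a minimal, self-contained sketch of symmetric per-tensor INT8 quantization in plain Python. It is illustrative only and is not Vivante's ACUITY tooling or the NPU's actual numerics:

```python
def quantize_int8(values):
    """Symmetric per-tensor INT8 quantization: the scale maps the
    largest absolute value onto the signed range [-127, 127]."""
    scale = max(abs(v) for v in values) / 127.0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate real values from the quantized integers."""
    return [x * scale for x in q]

# Hypothetical weight values, chosen for illustration.
weights = [0.42, -1.27, 0.05, 0.9]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Each recovered value differs from the original by at most half a
# quantization step (scale / 2).
```

Offline converters apply the same idea per tensor (often per channel, and with INT16 where extra precision is needed), trading a small accuracy loss for 8-bit storage and integer arithmetic on the NPU.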

Key Features

  • ML inference engine for deeply embedded systems
  • NN Engine
    • 48, 96, 192, or 384 MAC configurations
    • INT8 or INT16 weights and activations
    • Flexible mixed precision inference
    • Tensor processor core for low power RNN/LSTM and non-convolutional operations
  • Supports popular ML frameworks
    • TensorFlow, TensorFlow Lite, TensorFlow Lite Micro, PyTorch, ONNX, Arm NN, Caffe
    • 50+ built-in operations requiring no CPU processing
  • Supports a wide range of NN algorithms with flexible layer ordering
  • Supports low-power interfaces

Block Diagram

(Block diagram: VIP9000Pico NPU IP for the wearable and IoT market)

Technical Specifications

Foundry, Node: All
Maturity: Silicon Integration
Availability: Now