Configurable AI inference processor IP whose performance and size can be optimized, processing data such as images, video, and sound on the edge, where real-time response, safety, and privacy protection are required.
AI inference processor IP
Overview
Key Features
- High precision
- The DV700 series' computing units support FP16 floating-point arithmetic as standard, so AI models trained on PCs and cloud servers can be used without re-training. Because it maintains high inference precision, it is an ideal AI processor IP for systems that require high reliability, such as autonomous driving and robotics.
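The claim that FP32-trained models run without re-training rests on FP16's precision being close enough for inference. A minimal NumPy sketch (synthetic weight values, not from any real model) illustrating how small the FP32-to-FP16 round-trip error typically is:

```python
import numpy as np

# Synthetic FP32 weights, standing in for a model trained on a PC/cloud server
w_fp32 = np.random.default_rng(0).normal(0, 0.1, size=1000).astype(np.float32)

# Cast to FP16 for deployment on an FP16-native inference core
w_fp16 = w_fp32.astype(np.float16)

# Round-trip error is tiny relative to the weight magnitudes, which is
# why FP16 deployment usually requires no re-training
err = np.abs(w_fp32 - w_fp16.astype(np.float32))
rel = err.max() / np.abs(w_fp32).max()
print(f"max relative error: {rel:.2e}")
```

FP16 carries roughly 11 bits of mantissa, so the per-element relative error stays below about 5e-4 for normally scaled weights.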
- Compatible with various DNN models
- The DV700 series has a hardware configuration optimized for deep-learning inference and can run a wide range of DNN models for tasks such as object detection, semantic segmentation, skeleton estimation, and distance estimation.
- Examples of compatible models: MobileNet, YOLOv3, SegNet, PoseNet
- Development environment (SDK/tools) that facilitates AI application development
- The DV700 series comes with a development environment (SDK/tools) accompanying the IP core. The SDK supports the standard AI development frameworks (Caffe, Keras, TensorFlow), so customers can easily run AI inference on the DV700 series by preparing a model in one of these frameworks.
- * Please refer to GitHub for details of the development environment (SDK/tools).
Benefits
- Up to 1K MACs (2 TOPS @ 1 GHz)
- Processor replaced with an optimized controller
- High-bandwidth on-chip RAM (512 KB – 4 MB)
- 8-bit weight compression
- Framework
- Caffe 1.x, Keras 2.x
- TensorFlow 1.15
- ONNX format support
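The "8-bit weight compression" above presumably reduces weight storage and memory bandwidth by encoding each weight in one byte. The DV700's actual scheme is not documented here; the sketch below shows generic affine (scale/zero-point) quantization, a common way such compression works, using synthetic NumPy weights:

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(0, 0.05, size=4096).astype(np.float32)  # example FP32 weights

# Affine quantization to uint8: map [min, max] onto the 0..255 code range
lo, hi = w.min(), w.max()
scale = (hi - lo) / 255.0
q = np.round((w - lo) / scale).astype(np.uint8)  # 1 byte/weight, 4x smaller than FP32

# Dequantize and measure reconstruction error (bounded by half a step)
w_hat = q.astype(np.float32) * scale + lo
max_err = np.abs(w - w_hat).max()
print(f"max abs error: {max_err:.3e} (half step = {scale / 2:.3e})")
```

Because rounding never moves a value more than half a quantization step, the worst-case reconstruction error is `scale / 2`.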
Block Diagram
Applications
- Automotive, Surveillance Camera, Drone, Wearable, Medical, Robot, Smartphones, Tablets, TV, Gaming, Multifunction Printers, Digital Cameras and More
Deliverables
- Synthesisable RTL
- SDK/Tools
- OS Support: Linux, RTOS
Technical Specifications
Foundry, Node
28 nm, 22 nm, 20 nm, 12 nm, 7 nm
Maturity
Silicon Proven IP
Availability
NOW
Related IPs
- High-performance 32-bit multi-core processor with AI acceleration engine
- ML Inference Processor with Balanced Efficiency and Performance
- High-Efficiency, Low-Area ML Inference Processor
- High-performance 64-bit RISC-V architecture multi-core processor with AI vector acceleration engine
- Highly Scalable and Efficient Second-Generation ML Inference Processor
- Small-size ISP (Image Signal Processing) IP ideal for AI camera systems