OPENEDGES is a memory-system IP provider whose portfolio includes DDR memory controller, DDR PHY, on-chip interconnect, and NPU IP, offered together as an integrated solution or as independent IP. These IPs are tightly combined to deliver high performance and low latency. OPENEDGES released ENLIGHT, the first commercial mixed-precision (4-/8-bit) NPU IP, in February 2022. When ENLIGHT is used with other OPENEDGES IP, it reaches maximum efficiency in power consumption, area, and DRAM optimization.
ENLIGHT, a high-performance neural network processor IP, features a highly optimized network model compiler that removes DRAM traffic for intermediate activation data through grouped layer partitioning and scheduling. It also supports load-balancing partitioning for multi-core NPUs. With the industry's first adoption of 4-/8-bit mixed quantization, ENLIGHT is easy to customize to different core sizes and performance targets for the intended market applications, achieving significant efficiency gains in size, power, performance, and DRAM bandwidth.
A production-proven IP, ENLIGHT has been licensed for a wide range of applications, including IP cameras, IoT, ADAS, and more.
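The DRAM-traffic benefit of grouped layer partitioning can be illustrated with a back-of-the-envelope model. This is a hypothetical sketch, not OPENEDGES's actual scheduler: the idea is that when consecutive layers are fused into one group, their intermediate activations stay in on-chip memory, so only activations crossing a group boundary round-trip through DRAM.

```python
# Illustrative model (not OPENEDGES code): intermediate activations that cross
# a group boundary are written to DRAM and read back (2x their size); activations
# inside a group stay on-chip and cost no DRAM traffic.

def dram_traffic(layer_output_sizes, group_boundaries):
    """Bytes of DRAM traffic caused by intermediate activations.

    layer_output_sizes: output size (bytes) of each layer, in execution order.
    group_boundaries: set of layer indices after which a group ends.
    """
    traffic = 0
    for i, size in enumerate(layer_output_sizes[:-1]):  # final output excluded
        if i in group_boundaries:
            traffic += 2 * size  # write out + read back
    return traffic

sizes = [100, 100, 100, 50]                                  # hypothetical layer outputs
ungrouped = dram_traffic(sizes, group_boundaries={0, 1, 2})  # every layer separate
grouped = dram_traffic(sizes, group_boundaries={1})          # layers 0-1 and 2-3 fused
```

Under these toy numbers, fusing the four layers into two groups cuts intermediate-activation traffic from 600 to 200 bytes, which is the kind of saving the compiler's partitioning targets.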
4-/8-bit mixed-precision NPU IP
Overview
Key Features
- The industry's first adoption of 4-/8-bit mixed quantization
- Higher efficiency in PPA (power, performance, area) and DRAM bandwidth
- DNN-optimized Vector Engine (better adaptation to future DNN changes)
- Multi-core scale-out (even higher performance through parallel processing of DNN layers)
- Modern DNN algorithm support (depth-wise convolution, feature pyramid network (FPN), swish/mish activation, etc.)
- High-level inter-layer optimization (grouped layer partitioning and scheduling to reduce DRAM traffic from intermediate activation data)
- DNN layer parallelization (efficient use of multi-core resources for higher performance and optimized data movement among cores)
- Aggressive quantization (maximizes use of the 4-bit computation capability)
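To make the 4-/8-bit mixed-precision idea concrete, here is a minimal, self-contained sketch of symmetric integer quantization. It is illustrative only and does not reflect ENLIGHT's internal arithmetic: each tensor gets one scale, and the bit width (4 or 8) is chosen per layer, trading a coarser step size for half the storage and bandwidth.

```python
# Illustrative sketch of symmetric quantization (not OPENEDGES code).
# qmax is 7 for int4 and 127 for int8; smaller bit widths mean coarser steps
# but lower storage and DRAM bandwidth, as described above.

def quantize_symmetric(values, bits):
    """Quantize a list of floats to signed integers of the given bit width."""
    qmax = 2 ** (bits - 1) - 1                 # e.g. 7 (int4), 127 (int8)
    max_abs = max(abs(v) for v in values) or 1.0
    scale = max_abs / qmax                     # one scale for the whole tensor
    q = [max(-qmax - 1, min(qmax, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Map quantized integers back to approximate float values."""
    return [v * scale for v in q]

weights = [0.5, -1.0, 0.25, 0.9]
q8, s8 = quantize_symmetric(weights, bits=8)   # finer steps, 8 bits per value
q4, s4 = quantize_symmetric(weights, bits=4)   # half the storage, coarser steps
```

A mixed-precision compiler would assign 4 bits to layers that tolerate the extra rounding error and keep 8 bits where accuracy is sensitive.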
Benefits
- Easy customization to different core sizes and performance levels
- NN Converter converts a network file into an internal network format and supports ONNX (PyTorch), TF-Lite, and CFG (DarkNet)
- NN Quantizer generates a quantized network and supports per-layer quantization for activation and per-channel quantization for weight
- NN Simulator evaluates full precision network and quantized network and estimates accuracy loss due to quantization
- NN Compiler generates NPU handling code for the target architecture and network
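The NN Quantizer's granularity choice (per-layer for activations, per-channel for weights) can be sketched as follows. This is a hypothetical illustration, not the toolkit's API: activations share one scale per tensor, while each output channel of a weight tensor gets its own scale, which preserves accuracy when channel magnitudes differ widely.

```python
# Illustrative sketch (not the NN Quantizer's actual API): per-layer vs
# per-channel quantization scales for 8-bit symmetric quantization.

def max_abs(values):
    """Largest magnitude in a non-empty list (1.0 if all zeros)."""
    return max(abs(v) for v in values) or 1.0

def per_layer_scale(activations, bits=8):
    # One scale shared by every element of the activation tensor.
    return max_abs(activations) / (2 ** (bits - 1) - 1)

def per_channel_scales(weight_channels, bits=8):
    # An independent scale per output channel: a small-magnitude channel
    # is not forced to use the coarse scale of a large-magnitude one.
    qmax = 2 ** (bits - 1) - 1
    return [max_abs(ch) / qmax for ch in weight_channels]

acts = [0.1, 2.0, -0.5]             # hypothetical activation tensor
w = [[0.01, -0.02], [1.5, -1.0]]    # two output channels with very different ranges
s_act = per_layer_scale(acts)       # single scale for the whole tensor
s_w = per_channel_scales(w)         # one scale per channel
```

With per-layer scaling, the first weight channel above would be crushed into one or two quantization steps; the per-channel scales keep both channels well resolved.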
Block Diagram
Applications
- Person, vehicle, bike, traffic sign detection
- Parking lot vehicle location detection & recognition
- License plate detection & recognition
- Detection, tracking, and action recognition for surveillance, etc.
Deliverables
- RTL design for synthesis
- SW toolkits and device driver
- User guide
- Integration guide
Technical Specifications
Maturity
Production- and market-proven
Availability
Now
Related IPs
- NPU
- Complete memory system supporting any combinations of SDR SDRAM, DDR, DDR2, Mobile SDR, FCRAM, Flash, EEPROM, SRAM and NAND Flash, all in one IP core
- DDR3 Controller IP
- High-performance 2D (sprite graphics) GPU IP combining high pixel processing capacity and minimum gate count.
- FlexNoC 5 Interconnect IP
- BCH Encoder/Decoder IP Core