Highly scalable inference NPU IP for next-gen AI applications
Overview
This state-of-the-art inference neural processing unit (NPU) IP is suited to high-performance edge devices, including automotive systems, cameras, and more. ENLIGHT Pro is meticulously engineered to deliver flexibility, scalability, and configurability, improving overall efficiency in a compact footprint. ENLIGHT Pro supports the transformer model, a key requirement in modern AI applications, particularly Large Language Models (LLMs). LLMs, trained with deep learning techniques on extensive datasets, are instrumental in tasks such as text recognition and generation. The automotive industry is expected to adopt LLMs to offer instant, personalized, and accurate responses to customer inquiries.
ENLIGHT Pro sets itself apart by achieving 4096 MACs/cycle for 8-bit integers, quadrupling the throughput of ENLIGHT Classic, and operating at up to 1.0 GHz on a 14nm process node. It offers performance ranging from 8 TOPS (tera-operations per second) to hundreds of TOPS, optimized for flexibility and scalability. ENLIGHT Pro supports tensor shape transformation operations, including slicing, splitting, and transposing, and a wide range of data types (INT8, INT16, INT32, FP16, and FP32) to ensure flexibility across computational tasks. The vector processor achieves 16 FP16 MACs/cycle and includes a 32x2 KB vector register file (VRF). Single-core, dual-core, and quad-core configurations are available, with scalable task mappings such as multiple models, data parallelism, and tensor parallelism.
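As a rough check on the headline figures, peak throughput follows directly from MACs/cycle and clock frequency. This sketch assumes the common convention of counting each MAC as two operations (multiply plus accumulate); the function name is illustrative, not part of any vendor tool.

```python
# Back-of-envelope peak-throughput check (a sketch; figures from the text above).
MACS_PER_CYCLE = 4096   # INT8 MACs per cycle per core
CLOCK_HZ = 1.0e9        # up to 1.0 GHz on a 14nm node
OPS_PER_MAC = 2         # assumed convention: one multiply + one accumulate

def peak_tops(macs_per_cycle, clock_hz, cores=1):
    """Peak throughput in TOPS (tera-operations per second)."""
    return macs_per_cycle * OPS_PER_MAC * clock_hz * cores / 1e12

print(peak_tops(MACS_PER_CYCLE, CLOCK_HZ))           # single core: 8.192 TOPS
print(peak_tops(MACS_PER_CYCLE, CLOCK_HZ, cores=4))  # quad-core: 32.768 TOPS
```

The single-core result (~8.2 TOPS) matches the 8 TOPS entry point quoted above; multi-core configurations scale the figure linearly in this idealized model.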
ENLIGHT Pro incorporates a RISC-V CPU with a vector extension and custom instructions, including support for Softmax and local storage access, enhancing its overall flexibility. It comes with a software toolkit that supports widely used network formats such as ONNX (PyTorch), TFLite (TensorFlow), and CFG (Darknet). The ENLIGHT SDK streamlines the conversion of floating-point networks to integer networks and generates NPU commands and network parameters via its network compiler.
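To illustrate what a dedicated Softmax instruction accelerates, here is the standard max-subtracted formulation in plain Python; this is a generic sketch of the operation, not OPENEDGES code.

```python
import math

def softmax(x):
    """Numerically stable softmax over a list of floats."""
    # Subtracting the max avoids overflow in exp(); the exp, sum, and divide
    # are the per-element reductions a custom vector instruction can speed up.
    m = max(x)
    exps = [math.exp(v - m) for v in x]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print(probs)  # probabilities in the same order as the inputs, summing to 1.0
```

Softmax appears in every transformer attention layer, which is why accelerating it matters for the LLM workloads mentioned above.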
Key features
- Mixed-Precision Computation (INT8, INT16, FP16): Achieving accuracy while preserving power, performance, and area (PPA) efficiencies
- Deep Neural Network (DNN)-optimized Vector Engine: Custom instructions for Softmax and local storage access & enhanced adaptability for future DNNs
- Scale-out w/ Multi-core: Greater performance by parallel processing of DNN layers
- Modern DNN Algorithm Support: Transformer architecture, depth-wise convolution, feature pyramid network (FPN), etc.
- High-level Inter-layer Optimization: Optimized layer grouping and scheduling to minimize DRAM traffic from intermediate data
- DNN-layers Parallelization: Effective multi-core utilization for elevated performance & optimized core-to-core data transfer
- Automated Quantization Flow: Minimization of quantization loss through mixed-precision computation
Block Diagram
Benefits
- NN Converter
  - Converts a network file into the internal network format (.enlight)
  - Supports ONNX (PyTorch), TF-Lite, and CFG (Darknet)
- NN Quantizer
  - Generates a quantized network: float to 4-/8-bit integer
  - Supports per-layer quantization of activations and per-channel quantization of weights
- NN Simulator
  - Evaluates the full-precision network and the quantized network
  - Estimates accuracy loss due to quantization
- NN Compiler
  - Generates NPU handling code for the target architecture and network
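The per-channel weight quantization and accuracy-loss estimation described above can be sketched with symmetric linear quantization. This is a minimal illustration of the general technique; the function names and toy weights are hypothetical, not the ENLIGHT SDK's API.

```python
def quantize_symmetric(values, bits=8):
    """Symmetric linear quantization of floats to signed integers."""
    qmax = 2 ** (bits - 1) - 1                      # 127 for INT8
    scale = max(abs(v) for v in values) / qmax or 1.0
    q = [max(-qmax, min(qmax, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

# Per-channel weight quantization: one scale per output channel, so a
# small-magnitude channel keeps its own resolution.
weights = [[0.5, -0.3, 0.1], [0.02, -0.012, 0.015]]
per_channel = [quantize_symmetric(ch) for ch in weights]

# Simulator-style error estimate: compare dequantized values to the originals.
for ch, (q, s) in zip(weights, per_channel):
    err = max(abs(a - b) for a, b in zip(ch, dequantize(q, s)))
    print(f"scale={s:.6f} max-abs-error={err:.6f}")
```

Because each channel gets its own scale, the second channel's small weights are not crushed by the first channel's larger range, which is the motivation for per-channel (rather than per-tensor) weight quantization.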
Applications
- Automotive
- Cameras
  - Person, vehicle, bike, and traffic sign detection
  - Parking lot vehicle location detection & recognition
  - License plate detection & recognition
  - Detection, tracking, and action recognition for surveillance
What’s Included?
- RTL design for synthesis
- SW toolkits and device driver
- User guide
- Integration guide
Files
Note: some files may require an NDA depending on provider policy.
Frequently asked questions about NPU IP cores
What is Highly scalable inference NPU IP for next-gen AI applications?
Highly scalable inference NPU IP for next-gen AI applications is an NPU IP core from OPENEDGES Technology, Inc., listed on Semi IP Hub.
How should engineers evaluate this NPU?
Engineers should review the overview, key features, supported foundries and nodes, maturity, deliverables, and provider information before shortlisting this NPU IP.
Can this semiconductor IP be compared with similar products?
Yes. Buyers can compare this product with similar semiconductor IP cores or IP families based on category, provider, process options, and structured technical specifications.