Efficiently Packing Neural Network AI Models for the Edge
Packing applications into constrained on-chip memory is a familiar problem in embedded design, and it is now equally important when compacting neural network AI models into constrained storage. In some ways this problem is even harder than for conventional software, because working memory in a neural network-based system is all "inner loop": any demand to page out to DDR memory can kill performance. Equally bad, repetitive DDR accesses during inference will blow the low power budgets typical of edge devices. A larger on-chip memory is one way to resolve the problem, but that adds to product cost. The best option, where possible, is to pack the model as efficiently as possible into the available memory.
When compiling a neural network AI model to run on an edge device, there are well-known quantization techniques to reduce its size: converting floating-point data and weight values to fixed point, then shrinking further to INT8 or smaller values. Imagine if you could go further. In this article I want to introduce a couple of graph optimization techniques that will let you fit a wider range of quantized models into, say, a 2 MB L2 memory, where they would not have fit after quantization alone.
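To make the quantization step concrete, here is a minimal sketch of post-training symmetric INT8 quantization and the resulting footprint check against a 2 MB budget. The layer shapes, the `quantize_int8` helper, and the NumPy-based approach are illustrative assumptions for this sketch, not the article's actual toolchain; a production flow would also handle activations, per-channel scales, and calibration data.

```python
# Sketch: per-tensor symmetric INT8 quantization of float32 weights,
# plus a check of the quantized footprint against a 2 MB on-chip budget.
# Model layers and shapes below are hypothetical.
import numpy as np

def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Map float32 weights onto INT8 with a single per-tensor scale."""
    scale = np.abs(weights).max() / 127.0   # largest magnitude maps to +/-127
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale

# Hypothetical model: layer name -> float32 weight tensor
model = {
    "conv1": np.random.randn(32, 3, 3, 3).astype(np.float32),
    "conv2": np.random.randn(64, 32, 3, 3).astype(np.float32),
    "fc":    np.random.randn(10, 64 * 8 * 8).astype(np.float32),
}

L2_BUDGET = 2 * 1024 * 1024  # 2 MB on-chip memory target

fp32_bytes = sum(w.nbytes for w in model.values())
int8_bytes = sum(quantize_int8(w)[0].nbytes for w in model.values())

print(f"float32 footprint: {fp32_bytes / 1024:.1f} KiB")
print(f"int8 footprint:    {int8_bytes / 1024:.1f} KiB")
print(f"fits in 2 MB L2 after quantization: {int8_bytes <= L2_BUDGET}")
```

INT8 gives roughly a 4x reduction over float32, which is exactly why a model that misses the budget even after this step needs the graph-level optimizations the article goes on to discuss.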