Efficiently Packing Neural Network AI Models for the Edge
Packing applications into constrained on-chip memory is a familiar problem in embedded design, and it is now equally important when compacting neural network AI models into constrained storage. In some ways this problem is even more challenging than for conventional software, because working memory in neural network-based systems is all "inner loop": any demand to page out to DDR memory can kill performance. Equally bad, repetitive DDR accesses during inferencing will blow typical low-power budgets for edge devices. A larger on-chip memory is one way to resolve the problem, but that adds to product cost. The best option, where possible, is to pack the model as efficiently as possible into the available memory.
When compiling a neural network AI model to run on an edge device, there are well-known quantization techniques to reduce its size: converting floating-point data and weight values to fixed point, then shrinking further to INT8 or smaller values. Imagine if you could go further. In this article I want to introduce a couple of graph optimization techniques which will let you fit a wider range of quantized models into, say, a 2MB L2 memory where they would not have fit after quantization alone.
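To make the size arithmetic concrete, here is a minimal sketch in Python of symmetric per-tensor INT8 post-training quantization, one common form of the fixed-point conversion mentioned above. The function names (`quantize_int8`, `dequantize`) are illustrative rather than taken from the article, and real deployment toolchains add per-channel scales and calibration steps that this sketch omits.

```python
import numpy as np

def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric per-tensor quantization of float weights to INT8.

    Returns the quantized weights plus the scale factor needed to
    map them (approximately) back to the original float range.
    """
    # Map the largest absolute weight onto the INT8 extreme (+/-127);
    # the small floor guards against an all-zero tensor.
    scale = max(float(np.max(np.abs(weights))) / 127.0, 1e-12)
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights, e.g. for accuracy checks."""
    return q.astype(np.float32) * scale

# Example: a float32 layer of 1M weights (4 MB) shrinks to 1 MB as INT8.
w = np.random.randn(1_000_000).astype(np.float32)
q, scale = quantize_int8(w)
print(f"float32: {w.nbytes / 1e6:.1f} MB -> int8: {q.nbytes / 1e6:.1f} MB")
print(f"max abs error: {np.max(np.abs(dequantize(q, scale) - w)):.4f}")
```

Even this 4x reduction from float32 to INT8 is often not enough on its own, which is exactly the gap the graph-level optimizations discussed in the article are meant to close.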