Efficient inference on IMG Series4 NNAs
Research into neural network architectures generally prioritises accuracy over efficiency. Some papers have investigated efficiency (Tan and Le 2020; Sandler et al. 2018), but often with CPU- or GPU-based rather than accelerator-based inference in mind.
In this original work from Imagination’s AI Research team, we evaluate many well-known classification networks trained on ImageNet. We are not interested in accuracy or cost in their own right, but rather in efficiency, which combines the two: we want networks that achieve high accuracy on our IMG Series4 NNAs at as low a cost as possible. We cover:
- identifying ImageNet classification network architectures that give the best accuracy/performance trade-offs on our Series4 NNAs.
- reducing cost dramatically using quantisation-aware training (QAT) and low-precision weights without affecting accuracy.
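To give a flavour of the second point, the sketch below shows the basic idea behind quantisation-aware training: weights are "fake-quantised" to a low-precision grid in the forward pass while gradients flow through unchanged, so the network learns to tolerate the reduced precision. This is a minimal illustrative example in PyTorch with a straight-through estimator; the class names, bit-width and layer choice are assumptions for illustration, not the actual training flow used in the article.

```python
# Minimal QAT sketch: fake-quantise weights in the forward pass,
# use a straight-through estimator in the backward pass.
# All names here are illustrative, not Imagination's training code.
import torch
import torch.nn as nn

class FakeQuantise(torch.autograd.Function):
    """Symmetric signed-integer fake quantisation of a weight tensor."""
    @staticmethod
    def forward(ctx, w, num_bits):
        qmax = 2 ** (num_bits - 1) - 1              # e.g. 7 for 4-bit signed
        scale = w.abs().max().clamp(min=1e-8) / qmax
        return torch.round(w / scale).clamp(-qmax, qmax) * scale

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: pass the gradient through unchanged.
        return grad_output, None

class QATLinear(nn.Module):
    """Linear layer whose weights are fake-quantised during training."""
    def __init__(self, in_features, out_features, num_bits=4):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.num_bits = num_bits

    def forward(self, x):
        w_q = FakeQuantise.apply(self.linear.weight, self.num_bits)
        return nn.functional.linear(x, w_q, self.linear.bias)

# Usage: a drop-in replacement for nn.Linear, e.g. in a classifier head.
layer = QATLinear(2048, 1000, num_bits=4)
out = layer(torch.randn(8, 2048))
```

After training with fake quantisation, the weights can be exported at the low precision the hardware expects, which is what allows the cost reduction without an accuracy penalty.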
To read the full article, click here