Efficient inference on IMG Series4 NNAs
Research into neural network architectures generally prioritises accuracy over efficiency. Some papers have investigated efficiency (Tan and Le, 2020; Sandler et al., 2018), but often with CPU- or GPU-based rather than accelerator-based inference in mind.
In this original work from Imagination’s AI Research team, we evaluate many well-known classification networks trained on ImageNet. We are interested not in accuracy or cost in their own right, but in efficiency, which combines the two: we want networks that achieve high accuracy on our IMG Series4 NNAs at as low a cost as possible. We cover:
- identifying ImageNet classification network architectures that give the best accuracy/performance trade-offs on our Series4 NNAs.
- reducing cost dramatically using quantisation-aware training (QAT) and low-precision weights without affecting accuracy.
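The low-precision-weights idea above can be illustrated with a minimal NumPy sketch of uniform symmetric quantisation, the round trip that quantisation-aware training learns through. This is an illustrative toy, not Imagination’s implementation; the function names, the 4-bit width, and the per-tensor scaling scheme are all assumptions for the example.

```python
import numpy as np

def quantise_weights(w, bits=4):
    """Map a float weight tensor onto a small signed-integer grid.

    Returns the integer codes and a per-tensor scale factor; storing
    and multiplying these small integers is what makes low-precision
    weights cheap on an accelerator.
    """
    qmax = 2 ** (bits - 1) - 1            # e.g. 7 for signed 4-bit
    scale = np.max(np.abs(w)) / qmax      # per-tensor scale (assumed scheme)
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantise(q, scale):
    """Recover approximate float weights from integer codes and scale."""
    return q.astype(np.float32) * scale

# Round-trip a random weight matrix and inspect the quantisation error.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(8, 8)).astype(np.float32)
q, s = quantise_weights(w, bits=4)
w_hat = dequantise(q, s)
print("max abs error:", np.max(np.abs(w - w_hat)))
```

During QAT, the forward pass uses the dequantised weights `w_hat` so the network learns to tolerate the rounding error, which is how accuracy can be preserved at such low bit widths.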