Making the most of Arm NN for GPU inference: FP16 and FastMath
Most operations in deep learning involve massive amounts of data but simple control logic. As parallel processors, GPUs are well suited to this type of workload. Current high-end mobile GPUs can provide substantial throughput thanks to having hundreds of Arithmetic Logic Units (ALUs). In fact, GPUs were built with a single purpose – parallel data processing – initially for 3D graphics and later for more general parallel computing.
Additionally, GPUs are energy-efficient processors. Nowadays, tera-operations per second per watt (TOPS/W) is used to evaluate the energy efficiency of mobile processors and embedded devices. GPUs achieve high TOPS/W thanks to their relatively simple control units and lower clock frequencies.
One of the biggest challenges facing mobile inference (and training) of deep neural networks (DNNs) is memory. A neural network (NN) needs memory to store input data, weight parameters, and activations as an input propagates through the network. As an example, the 50-layer ResNet network has ~26 million weight parameters and computes ~16 million activations in the forward pass. Adding on-chip memory is one way to ease the memory bottleneck by providing higher memory bandwidth; however, on-chip memory is an expensive feature.
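To put those figures in perspective, a back-of-the-envelope calculation (assuming 4 bytes per FP32 value and 2 bytes per FP16 value; the function name is illustrative) shows why FP16 inference is attractive: halving the precision halves the storage and bandwidth needed for weights and activations.

```python
# Rough memory footprint of ResNet-50's weights and activations,
# assuming 4 bytes per FP32 value and 2 bytes per FP16 value.
WEIGHTS = 26_000_000      # ~26 million weight parameters
ACTIVATIONS = 16_000_000  # ~16 million activations per forward pass

def footprint_mb(count, bytes_per_value):
    """Return the storage needed in megabytes."""
    return count * bytes_per_value / 1e6

fp32_total = footprint_mb(WEIGHTS + ACTIVATIONS, 4)  # 168.0 MB
fp16_total = footprint_mb(WEIGHTS + ACTIVATIONS, 2)  # 84.0 MB

print(f"FP32: {fp32_total:.0f} MB, FP16: {fp16_total:.0f} MB")
```

On a mobile SoC with limited on-chip memory, that difference translates directly into fewer off-chip memory accesses, which dominate both latency and energy cost.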
To read the full article, click here
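As a minimal sketch of the two techniques named in the title: in Arm NN's public C++ API, FP16 conversion is requested through `OptimizerOptions::m_ReduceFp32ToFp16`, and FastMath is enabled as a backend option on the `GpuAcc` (Mali GPU) backend. The helper function below is illustrative, and assumes the network has already been parsed into an `armnn::INetwork` elsewhere.

```cpp
// Sketch: optimizing a parsed network for Arm NN's GPU backend
// with FP16 and FastMath enabled. `network` is assumed to come
// from one of Arm NN's parsers (e.g. the TfLite parser).
#include <armnn/ArmNN.hpp>
#include <armnn/BackendOptions.hpp>

armnn::IOptimizedNetworkPtr OptimizeForGpu(armnn::INetwork& network,
                                           armnn::IRuntime& runtime)
{
    // Convert FP32 weights/activations to FP16 where supported.
    armnn::OptimizerOptions optimizerOptions;
    optimizerOptions.m_ReduceFp32ToFp16 = true;

    // Allow the GpuAcc backend to pick faster, less precise kernels
    // (e.g. Winograd-based convolutions) via FastMath.
    armnn::BackendOptions gpuAcc("GpuAcc", {{"FastMathEnabled", true}});
    optimizerOptions.m_ModelOptions.push_back(gpuAcc);

    return armnn::Optimize(network, {armnn::Compute::GpuAcc},
                           runtime.GetDeviceSpec(), optimizerOptions);
}
```

Both options are hints: layers that a backend cannot run in FP16 fall back to FP32, and FastMath only changes kernel selection where a faster variant exists.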