Making the most of Arm NN for GPU inference: FP16 and FastMath
Most operations in deep learning involve massive amounts of data but simple control logic. As parallel processors, GPUs are well suited to this type of task. Current high-end mobile GPUs provide substantial throughput thanks to their hundreds of Arithmetic Logic Units (ALUs). In fact, GPUs were built with a single purpose: parallel data processing, initially for 3D graphics and later for more general parallel computing.
Additionally, GPUs are energy-efficient processors. Nowadays, tera-operations per second per watt (TOPS/W) is a common metric for the energy efficiency of mobile processors and embedded devices. GPUs typically achieve higher TOPS/W than CPUs, thanks to their relatively simple control units and lower clock frequencies.
One of the biggest challenges facing mobile inference (and training) of deep neural networks (DNNs) is memory. A neural network (NN) needs memory to store input data, weight parameters, and activations as an input propagates through the network. As an example, the 50-layer ResNet network has ~26 million weight parameters and computes ~16 million activations in the forward pass. Adding on-chip memory is one way of easing this memory bottleneck, since it allows higher memory bandwidth; however, on-chip memory is expensive.
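To see why halving precision matters here, a quick back-of-the-envelope calculation using the ResNet-50 figures above shows the weight and activation footprint at FP32 versus FP16 (a rough sketch: the ~26 M weight and ~16 M activation counts are the ones cited above, and real deployments add framework and layout overhead on top):

```python
# Rough memory-footprint arithmetic for ResNet-50's forward pass.
# 4 bytes/value for FP32 and 2 bytes/value for FP16 are the standard
# widths; "MB" here means 2**20 bytes.
WEIGHTS = 26_000_000      # ~26 million weight parameters (cited above)
ACTIVATIONS = 16_000_000  # ~16 million activations per forward pass

def footprint_mb(count, bytes_per_value):
    """Memory needed to hold `count` values at the given precision, in MB."""
    return count * bytes_per_value / 2**20

for name, width in [("FP32", 4), ("FP16", 2)]:
    w = footprint_mb(WEIGHTS, width)
    a = footprint_mb(ACTIVATIONS, width)
    print(f"{name}: weights ~{w:.0f} MB, activations ~{a:.0f} MB")
```

Dropping from FP32 to FP16 cuts both numbers in half, which is exactly the kind of saving that makes FP16 inference attractive on bandwidth-limited mobile GPUs.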