Take your neural networks to the next level with Arm's Machine Learning Inference Advisor

Arm is forging a path to the future with solutions designed to support the rapid development of AI. One challenge is making this emerging technology accessible to the community. In this blog, we present the Arm ML Inference Advisor (Arm MLIA) and show you how to use it to improve model performance on Arm IP. We also explain some of the work leading up to it, and why it matters.

The unknown hardware side of Machine Learning

Designing neural networks is a challenge; ask anyone who has done it. You need to understand a number of complex concepts to get it right. In the ML space, many are familiar with high-level APIs such as TensorFlow and PyTorch. These powerful tools help us set up a pipeline for our use cases: training, tweaking, and generating the runtime. When the model is compiled for deployment, the assumption is often that that's the end of the story: you did the work to tune the model parameters during training, so your ML pipeline is optimized. But what happens when you deploy the model on a hardware target? Can we influence performance at the processor level? Today we are here to learn the rest of that story.
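
To make that "compiled for deployment" step concrete, here is a minimal sketch using TensorFlow's standard TensorFlow Lite converter. The tiny Keras model is only a placeholder standing in for whatever network you have actually trained, and the sketch assumes a stock TensorFlow installation.

```python
# Minimal sketch of the "compile for deployment" step described above.
# The model here is a throwaway placeholder; substitute your trained network.
import tensorflow as tf

# Placeholder model standing in for the network you trained and tweaked.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

# Convert the Keras model to a TensorFlow Lite flatbuffer, the artifact
# you would typically hand to on-device tooling for analysis.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

A .tflite file like this is the kind of artifact a tool such as the Arm MLIA can then examine, for example to report how well the model's operators map onto a given Arm target, which is where the rest of this story picks up.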
