Menta eFPGA IP for Edge AI

By Nassim Abderrahmane, AI Product Manager, Menta SAS

Abstract

As the demand for intelligent and autonomous applications surges, the need for highly efficient and adaptable edge devices capable of executing advanced Artificial Intelligence (AI) workloads, such as Artificial Neural Networks (ANNs), is more evident than ever. One effective way to meet this growing demand is Menta's embedded FPGA (eFPGA) Intellectual Property (IP), which combines a high degree of flexibility with the efficiency required by highly constrained edge applications. In this context, Menta presents a technical white paper titled 'Menta eFPGA IP for Edge AI' to demonstrate how its eFPGA IP enhances the deployment and acceleration of edge AI applications.

The paper is divided into two main sections. The first deals with AI algorithms, with a particular focus on Artificial Neural Networks (ANNs), widely acknowledged as the most commonly employed techniques in the field. The second shifts the focus to their hardware implementation, where three distinct hardware categories are evaluated: general-purpose processors, specialized AI chips, and programmable systems. The paper then introduces the use of eFPGA, which offers cutting-edge performance together with a level of flexibility well suited to the continuously evolving field of AI algorithms. Indeed, significant advances in AI occur over time, sometimes in a matter of weeks, ranging from novel learning methods to fundamental changes in algorithm structure, such as the integration of new layers or operators within neural networks. This rapid evolution quickly renders fixed-function solutions obsolete, because the required modifications may not be supported by their existing architectures. This situation calls for increasingly flexible alternatives, an approach we propose through our eFPGA IPs.

The “Menta eFPGA IP for Edge AI” technical white paper serves as a valuable resource for AI researchers and experts, as well as for specialists in system-on-chip (SoC) and ASIC engineering, who may be interested in leveraging eFPGA for efficient hardware implementation of AI algorithms. It enables them to achieve cutting-edge computational performance at reduced cost and energy consumption while retaining the high degree of flexibility that, as noted above, is essential for deploying edge AI applications.

Introduction to AI algorithms

Artificial neural networks are among the most commonly used models in AI applications. Inspired by the structure and functioning of biological neural systems, they have become very popular, with applications in several domains such as object detection, image segmentation, and natural language processing, due to two main factors: first, the high-performance computing capabilities of today’s machines; second, the huge amount of open data available for training deep neural networks. ANNs are composed of neurons connected to each other through weighted synapses in a given network topology. They fall into different categories depending on topology, data structure, type, and use case. In this paper, we focus on feed-forward ANNs, briefly describing their functioning and structure.
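To make this structure concrete, the following minimal sketch computes the inference of a small feed-forward network in Python: each layer is a weighted sum of its inputs plus a bias, passed through a nonlinear activation. The layer sizes, random weights, and ReLU activation are illustrative assumptions for this sketch, not values taken from the white paper.

    import numpy as np

    def dense_layer(x, weights, bias):
        # One fully connected layer: weighted sum of the inputs plus
        # a bias, followed by a ReLU nonlinearity.
        return np.maximum(0.0, weights @ x + bias)

    # Hypothetical 2-layer feed-forward network: 4 inputs -> 8 hidden -> 3 outputs.
    rng = np.random.default_rng(0)
    w1, b1 = rng.standard_normal((8, 4)), np.zeros(8)
    w2, b2 = rng.standard_normal((3, 8)), np.zeros(3)

    x = rng.standard_normal(4)       # example input vector
    hidden = dense_layer(x, w1, b1)  # hidden-layer activations
    output = w2 @ hidden + b2        # linear output layer
    print(output)

In a real deployment the weights would come from training, and the same weighted-sum structure repeats layer after layer, which is why ANN inference is dominated by multiply-accumulate operations.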

AI hardware deployment

Edge AI deployment, particularly ANN inference, has become increasingly common in various applications. Because of the huge amount of data generated at the edge, it has become very difficult to move data from sensors to the cloud for real-time intelligent processing. One solution is to bring AI to edge devices, near the sensor(s), and process data where they are generated. First, doing so reduces latency, because processing is performed locally without moving all the data to a centralized server. Second, it improves privacy, because sensitive data are processed locally rather than sent over the network, avoiding network attacks. It also improves reliability, since the device does not depend on internet connectivity, which can be unreliable in certain environments. Finally, it reduces cost: edge devices are inexpensive, cloud server resources can be scaled down because they receive much less data to process, and there is no need to build an infrastructure to move data around.

Different hardware solutions can be used as edge devices to deploy AI algorithms. They can be separated into three categories: general-purpose processors, specialized AI chips, and programmable systems. In this white paper, these hardware solutions are evaluated and compared within the context of edge AI. The Menta eFPGA IP belongs to the programmable category and is naturally compared against the other solutions.

Menta eFPGA IP for AI

Menta's eFPGA is a programmable logic IP that can be integrated into System-on-Chip (SoC) devices alongside other IPs and custom logic. This integration adds hardware programmability to the SoC: the integrated eFPGA IP may be reprogrammed at any time during the device's lifetime. The Menta eFPGA IP can perform a variety of functions such as digital signal processing, image processing, and neural network acceleration. The benefits of using the Menta eFPGA IP for AI applications fall into three categories:

  • Flexibility: this is one of the main advantages of the Menta eFPGA for edge AI. The IP can be reconfigured at any time after manufacturing to update the deployed AI application, for example to modify the ANN’s topology, update its parameters, or support new layers.
  • Power efficiency: the Menta eFPGA IP offers improved power efficiency because it can be optimized for a specific task. This can result in lower power consumption than running the same task on a general-purpose processor, which is crucial for edge AI applications, where power consumption is often a critical constraint.
  • Computing performance and latency: the Menta eFPGA IP can accelerate a specific function of an application at the hardware level, such as the optimized execution of a compute-intensive layer or operator of an ANN (see the sketch after this list). This can result in faster processing of data, improving the overall computing and latency performance of the edge AI application.
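
To make the acceleration target concrete, the sketch below spells out the integer multiply-accumulate (MAC) inner loop of a fully connected layer, the kind of compute-intensive kernel typically mapped onto programmable fabric. The int8 quantization, int32 accumulation, and layer sizes are illustrative assumptions about a typical fixed-point accelerator design, not a description of Menta's toolchain.

    import numpy as np

    def int8_dense(x_q, w_q):
        # Integer dense layer: the multiply-accumulate (MAC) inner loop
        # that programmable logic would implement with parallel hardware
        # multipliers. Inputs and weights are int8; partial sums are
        # widened to int32 to avoid overflow, as in common fixed-point
        # accelerator designs.
        acc = np.zeros(w_q.shape[0], dtype=np.int32)
        for i in range(w_q.shape[0]):      # one output neuron per row
            for j in range(x_q.shape[0]):  # MAC loop: unrolled in hardware
                acc[i] += np.int32(w_q[i, j]) * np.int32(x_q[j])
        return acc

    # Hypothetical quantized layer: 16 int8 inputs, 4 output neurons.
    rng = np.random.default_rng(1)
    x_q = rng.integers(-128, 128, size=16, dtype=np.int8)
    w_q = rng.integers(-128, 128, size=(4, 16), dtype=np.int8)
    print(int8_dense(x_q, w_q))  # same result as w_q.astype(np.int32) @ x_q

On a general-purpose processor this double loop executes sequentially; in programmable logic the independent multiplications can be instantiated as parallel hardware multipliers, which is where the latency and power gains listed above come from.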

Conclusion

In the context of edge AI, the Menta eFPGA IP is a very promising solution for accelerating and deploying artificial neural networks. Integrating an eFPGA IP into an SoC improves its performance and power consumption for neural network inference by accelerating the most computationally demanding parts of these algorithms, such as matrix multiply-accumulate operations. In addition, the flexibility of our eFPGA IP is an important advantage for AI deployment, as it can be reconfigured as many times as needed. This allows the deployed AI application to be updated for any change, such as modifying the ANN topology, updating its parameters after improved training, or supporting new layers or features. This is particularly valuable for edge AI applications, as data and requirements at the edge change frequently and the AI domain is constantly evolving. The use cases presented demonstrate the capabilities of Menta's eFPGA IP in deploying edge AI applications, offering the combined benefits of high performance, low power consumption, and flexibility.

To access the full version, please contact us at info@menta-efpga.com or schedule a meeting.
