The Expanding Markets for Edge AI Inference
By Geoff Tate, Flex Logix
EETimes (May 27, 2021)
While AI was originally targeted at data centers and the cloud, it has been moving rapidly toward the edge of the network, where fast, critical decisions must be made locally, closer to the end user. Training can still be done in the cloud, but in applications such as autonomous driving, time-sensitive decision making (spotting a car or pedestrian) must happen close to the end user (the driver). After all, edge systems can make decisions on images coming in at up to 60 frames per second, enabling quick action.
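To put that frame rate in concrete terms, 60 frames per second leaves roughly 16.7 ms to capture, run inference on, and act on each frame. Below is a minimal back-of-the-envelope sketch; the 60 fps figure comes from the article, but the pipeline stages and their millisecond costs are hypothetical placeholders:

```python
# Back-of-the-envelope latency budget for a 60 fps edge vision pipeline.
# The 60 fps rate comes from the article; the stage costs are hypothetical.

FPS = 60
frame_budget_ms = 1000 / FPS  # ~16.7 ms available per frame

# Hypothetical per-frame stage costs (milliseconds), for illustration only.
stages = {
    "capture + preprocess": 3.0,
    "inference on accelerator": 10.0,
    "postprocess + decision": 2.0,
}

total_ms = sum(stages.values())
print(f"Per-frame budget: {frame_budget_ms:.1f} ms")
print(f"Pipeline total:   {total_ms:.1f} ms "
      f"({'meets' if total_ms <= frame_budget_ms else 'misses'} the budget)")
```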
These systems are made possible by edge inference accelerators, which have emerged to replace CPUs, GPUs, and FPGAs at much higher throughput/$ and throughput/watt.
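Both figures of merit are simple ratios, as the sketch below shows; every number in it is an invented placeholder for illustration, not a benchmark of any real CPU, GPU, FPGA, or accelerator:

```python
# Computing throughput/$ and throughput/watt for candidate inference devices.
# All device figures below are invented placeholders, not real benchmarks.

devices = [
    # (name, inferences per second, price in $, power in watts)
    ("generic GPU card",           900, 1500.0, 75.0),
    ("edge inference accelerator", 800,  400.0, 15.0),
]

for name, ips, price, watts in devices:
    print(f"{name:28s} {ips / price:5.2f} inf/s per $   "
          f"{ips / watts:5.1f} inf/s per W")
```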
The ability to perform AI inference closer to the end user is opening up a whole new world of markets and applications. In fact, IDC recently reported that the market for AI software, hardware, and services is expected to break the $500 billion mark by 2024, with a five-year compound annual growth rate (CAGR) of 17.5% and total revenues reaching an impressive $554.3 billion.
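As a quick sanity check on those figures, the standard CAGR relation revenue_n = revenue_0 × (1 + CAGR)^n lets us back out the implied base-year market size; the $554.3 billion and 17.5% values come from the article, while the base year itself is not stated in this excerpt:

```python
# Sanity check of the CAGR claim: revenue_n = revenue_0 * (1 + cagr) ** n.
# The $554.3B and 17.5% figures are from the article; the implied
# base-year value is derived here, since the base year isn't stated.

cagr = 0.175
years = 5
revenue_2024_billions = 554.3

implied_base_billions = revenue_2024_billions / (1 + cagr) ** years
print(f"Implied base-year market size: ${implied_base_billions:.1f}B")  # ~$247.5B
```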
This rapid growth is likely because AI is expanding from "just a high-end functionality" into products closer to consumers, essentially bringing AI capabilities to the masses. In addition, recently announced products have started breaking the cost barriers typically associated with AI inference, enabling designers to incorporate AI into a wider range of affordable products.