Using edge AI processors to boost embedded AI performance

By Rehan Hameed, Kinara
embedded.com (November 24, 2022)

A look at how Kinara’s AI accelerator and NXP processors combine to deliver the edge AI performance needed for smart camera designs

The arrival of artificial intelligence (AI) in embedded computing has led to a proliferation of potential solutions that aim to deliver the high performance required to perform neural-network inferencing on streaming video at high frame rates. Though many benchmarks such as the ImageNet challenge work at comparatively low resolutions and can therefore be handled by many embedded AI solutions, real-world applications in retail, medicine, security, and industrial control call for the ability to handle video frames and images at resolutions up to 4Kp60 and beyond.

Scalability is vital, yet it is not always available with system-on-chip (SoC) platforms that provide a fixed combination of host processor and neural accelerator. Though they often provide a means of evaluating the performance of different forms of neural network during prototyping, such all-in-one implementations lack the granularity and scalability that real-world systems often need. Industrial-grade AI applications instead benefit from a more balanced architecture in which heterogeneous processors (e.g., CPUs, GPUs) and accelerators cooperate in an integrated pipeline: they not only perform inferencing on raw video frames but also apply pre- and post-processing to improve overall results and handle the format conversion needed to support multiple cameras and sensor types.
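As a rough illustration of such a partitioned pipeline, the Python sketch below runs pre-processing and post-processing on the host and hands the prepared tensor to an accelerator for inference. The `AcceleratorSession` class and its `infer()` method are hypothetical stand-ins for a vendor runtime, not an actual Kinara or NXP API, and the tensor layout and thresholds are assumptions made for the example.

```python
import numpy as np


class AcceleratorSession:
    """Hypothetical stand-in for a vendor inference runtime (not a real API)."""

    def infer(self, tensor: np.ndarray) -> np.ndarray:
        # A real runtime would dispatch this tensor to the accelerator;
        # here we return dummy detections so the sketch runs end to end.
        return np.random.rand(100, 6)  # columns: x1, y1, x2, y2, score, class


def preprocess(frame: np.ndarray, size: tuple = (640, 640)) -> np.ndarray:
    """Host-side pre-processing: crude nearest-neighbour resize, normalize, HWC -> NCHW."""
    h, w = frame.shape[:2]
    ys = np.linspace(0, h - 1, size[0]).astype(int)
    xs = np.linspace(0, w - 1, size[1]).astype(int)
    resized = frame[ys][:, xs]
    tensor = resized.astype(np.float32) / 255.0
    return tensor.transpose(2, 0, 1)[np.newaxis]


def postprocess(raw: np.ndarray, score_threshold: float = 0.5) -> np.ndarray:
    """Host-side post-processing: keep detections above a confidence threshold."""
    return raw[raw[:, 4] >= score_threshold]


session = AcceleratorSession()
frame = np.zeros((2160, 3840, 3), dtype=np.uint8)  # one 4K frame from the camera
detections = postprocess(session.infer(preprocess(frame)))
print(f"{len(detections)} detections above threshold")
```

In a real deployment the same host-side stages would also absorb per-camera format conversion (Bayer, YUV, RGB) so that a single accelerator-side model can serve several sensor types.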

Typical deployment scenarios are smart cameras and edge-AI appliances. For the former, the requirement is for vision processing and neural-network inferencing to be integrated into the main camera board. The camera may need to perform tasks such as counting the number of people in a room while avoiding counting anyone twice as subjects move in and out of view. Not only must the smart camera be able to recognize people, it must also be able to re-identify them from data it has already processed so that it does not double-count. This calls for a flexible image-processing and inferencing pipeline in which the application can handle basic object recognition as well as sophisticated inferencing-based tasks such as re-identification.
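One common way to avoid double counting, sketched below purely for illustration, is to keep a gallery of appearance embeddings for people already seen and increment the count only when a new embedding fails to match any stored one by cosine similarity. The `PeopleCounter` class and its threshold are assumptions for the example; the embedding model itself would be another network run on the accelerator and is left abstract here.

```python
import numpy as np


class PeopleCounter:
    """Counts unique people by matching appearance embeddings against a gallery.

    An embedding is treated as a new person only if its cosine similarity to
    every stored embedding falls below `match_threshold` (value is illustrative).
    """

    def __init__(self, match_threshold: float = 0.7):
        self.match_threshold = match_threshold
        self.gallery: list[np.ndarray] = []   # embeddings of people already counted

    def update(self, embedding: np.ndarray) -> int:
        """Add one detection's embedding; return the running unique-person count."""
        emb = embedding / np.linalg.norm(embedding)
        for seen in self.gallery:
            if float(np.dot(emb, seen)) >= self.match_threshold:
                return len(self.gallery)      # re-identified: do not count again
        self.gallery.append(emb)              # genuinely new person
        return len(self.gallery)


# Usage: embeddings would come from a re-identification model on the accelerator.
counter = PeopleCounter()
rng = np.random.default_rng(0)
person_a = rng.normal(size=128)
person_b = rng.normal(size=128)
for emb in (person_a, person_b, person_a + 0.01 * rng.normal(size=128)):
    print("unique people so far:", counter.update(emb))
```

The third update is a slightly perturbed view of the first person, so it matches the gallery and the count stays at two, which is the behaviour a smart camera needs when the same subject re-enters the frame.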

