Machines can see, hear and analyze thanks to embedded neural networks

Youval Nachum, CEVA
embedded.com (February 21, 2018)

The potential applications of artificial intelligence (AI) continue to grow on a daily basis. As different neural network (NN) architectures are tested, tuned and refined to tackle different problems, diverse methods of optimally analyzing data using AI are found. Many of today's AI applications, such as Google Translate and the speech and vision recognition systems behind Amazon Alexa, leverage the power of the cloud. By relying on always-on Internet connections, high-bandwidth links and web services, the power of AI can be integrated into Internet of Things (IoT) products and smartphone apps. To date, most attention has focused on vision-based AI, partly because it is easy to visualize in news reports and videos, and partly because it is such a human-like activity.

For image recognition, a 2D image is analyzed, a square group of pixels at a time, with successive layers of the NN recognizing ever larger features. At the beginning, edges with large contrast differences are detected. In a face, these occur around features such as the eyes, nose, and mouth. As the detection process progresses deeper into the network, whole facial features are detected. In the final phase, the combination of features and their positions converges on a specific face in the available dataset being identified as a likely match.
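The early-layer edge detection described above boils down to sliding a small kernel over the image and responding where contrast changes sharply. As a minimal sketch (not the article's own code), the following NumPy example applies a Sobel-style kernel to a synthetic image containing a vertical edge; the strongest responses land exactly where the dark and bright halves meet:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation -- the core operation of one NN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Synthetic 8x8 image: dark left half, bright right half (a vertical edge).
image = np.zeros((8, 8))
image[:, 4:] = 1.0

# Sobel-style kernel that responds to horizontal contrast differences.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

response = conv2d(image, sobel_x)
# Responses are zero in the flat regions and peak at the contrast edge.
print(response)
```

In a real NN, many such kernels are learned from data rather than hand-crafted, and their outputs feed deeper layers that combine edges into larger features.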

