Machines can see, hear and analyze thanks to embedded neural networks
Youval Nachum, CEVA
embedded.com (February 21, 2018)
The potential applications of artificial intelligence (AI) continue to grow daily. As different neural network (NN) architectures are tested, tuned, and refined to tackle different problems, more effective ways of analyzing data with AI emerge. Many of today's AI applications, such as Google Translate and the speech- and vision-recognition systems behind Amazon Alexa, leverage the power of the cloud. By relying on always-on Internet connections, high-bandwidth links, and web services, they bring AI into Internet of Things (IoT) products and smartphone apps. To date, most attention has focused on vision-based AI, partly because it is easy to show in news reports and videos, and partly because it is such a human-like activity.
For image recognition, a 2D image is analyzed a square group of pixels at a time, with successive layers of the NN recognizing ever larger features. At first, edges with sharp contrast differences are detected; in a face, these occur around features such as the eyes, nose, and mouth. As the detection process moves deeper into the network, whole facial features are detected. In the final phase, the combination of features and their positions leads to a specific face in the available dataset being identified as a likely match.
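To make the layered recognition idea concrete, the sketch below shows a small convolutional network in PyTorch. The layer sizes, the 64x64 grayscale input, and the ten-identity classifier are illustrative assumptions rather than details from the article; the point is simply that early convolutional layers respond to local edges, deeper layers combine them into larger features, and a final stage maps those features to a likely identity.

```python
# Minimal PyTorch sketch of the layered recognition described above.
# The architecture, layer sizes, and ten-identity output are illustrative
# assumptions, not details taken from the article.
import torch
import torch.nn as nn

class TinyFaceNet(nn.Module):
    def __init__(self, num_identities: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            # Early layer: small 3x3 windows respond to edges and contrast changes
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),   # 64x64 -> 32x32
            # Middle layer: combines edges into larger parts (eyes, nose, mouth)
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),   # 32x32 -> 16x16
            # Deeper layer: whole-face feature combinations
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),   # 16x16 -> 8x8
        )
        # Final phase: map the combined features to a score per known identity
        self.classifier = nn.Linear(64 * 8 * 8, num_identities)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = torch.flatten(x, start_dim=1)
        return self.classifier(x)

if __name__ == "__main__":
    net = TinyFaceNet()
    image = torch.randn(1, 1, 64, 64)   # one grayscale 64x64 image
    scores = net(image)
    print(scores.argmax(dim=1))         # index of the most likely identity
```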