Achieving Unprecedented Power Savings with Analog ML
By Tom Doyle, Aspinity
EETimes (January 12, 2023)
The rise of machine learning (ML) has enabled an entirely new class of use cases and applications. In particular, edge computing and on-device ML have given traditional devices the ability to monitor, analyze, and automate daily tasks.
Despite these advances, a major challenge remains: How do you balance the high-power demands of these ML applications with the low-power requirements of standalone, battery-powered devices? For these applications, traditional digital electronics are no longer the best option. Analog computing has emerged as the obvious choice to achieve ultra-low-power ML on the edge.
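The tension in that question is largely a duty-cycle problem: a battery's life is set by the average current drawn, so keeping the power-hungry ML stage asleep except when it is actually needed stretches run time dramatically. The short sketch below works through that arithmetic with hypothetical current-draw and battery figures that are not taken from the article:

```python
# Rough battery-life estimate for an always-listening edge device.
# All numeric values are hypothetical placeholders for illustration only.

BATTERY_MAH = 2 * 220.0  # e.g. two small coin cells (assumed capacity)

def battery_life_days(active_ma: float, idle_ma: float, duty_cycle: float) -> float:
    """Estimate battery life in days from a simple two-state power model."""
    avg_ma = duty_cycle * active_ma + (1.0 - duty_cycle) * idle_ma
    hours = BATTERY_MAH / avg_ma
    return hours / 24.0

# A digital ML pipeline that stays fully awake all the time ...
always_on = battery_life_days(active_ma=5.0, idle_ma=5.0, duty_cycle=1.0)

# ... versus one that sits in a micro-power detection state most of the time
# and only wakes the high-power stage for the rare events of interest.
event_gated = battery_life_days(active_ma=5.0, idle_ma=0.02, duty_cycle=0.01)

print(f"always-on:   {always_on:6.1f} days")
print(f"event-gated: {event_gated:6.1f} days")
```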
With the advent of on-edge ML, the industry has seen a proliferation of smart devices that respond to stimuli in the environment. Many households today, for example, host a virtual assistant like Amazon Alexa or Google Home that listens for a keyword before performing a task. Other examples include security cameras that monitor for movement in a frame and, on the industrial side, sensors that detect anomalies in the performance of an industrial machine.
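What these devices have in common is an always-on detection stage that gates a more capable, more power-hungry stage: the heavy ML model runs only when something in the signal looks worth analyzing. Below is a minimal sketch of that gating pattern, assuming a generic frame-based audio pipeline; the detector and classifier are placeholder functions written for illustration, not Aspinity's or any vendor's implementation.

```python
import numpy as np

FRAME_SIZE = 256  # samples per analysis frame (assumed)

def read_frame() -> np.ndarray:
    """Stand-in for pulling one frame of samples from a microphone."""
    return np.random.randn(FRAME_SIZE)  # placeholder signal

def looks_interesting(frame: np.ndarray, threshold: float = 2.0) -> bool:
    """Cheap always-on check, e.g. a simple signal-energy threshold."""
    return float(np.mean(frame ** 2)) > threshold

def run_full_classifier(frame: np.ndarray) -> str:
    """Stand-in for the expensive ML model that only runs when woken."""
    return "keyword" if float(frame.max()) > 3.0 else "background"

def main_loop(max_frames: int = 1000) -> None:
    for _ in range(max_frames):
        frame = read_frame()
        if looks_interesting(frame):            # low-power path, runs constantly
            label = run_full_classifier(frame)  # high-power path, runs rarely
            if label == "keyword":
                print("wake word detected -> hand off to main application")

if __name__ == "__main__":
    main_loop()
```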