ARM, IBM team on low power analog AI chip
By Nick Flaherty, eeNews Europe (September 9, 2022)
Researchers at ARM and IBM have developed a 14nm analog compute-in-memory chip for low-power, always-on machine learning.
Always-on perception tasks in IoT applications, dubbed TinyML, require very high energy efficiency. Analog compute-in-memory (CiM) using non-volatile memory (NVM) promises high energy efficiency and self-contained on-chip model storage.
However, analog CiM introduces new practical challenges, including conductance drift, read/write noise and fixed analog-to-digital converter (ADC) gain. These must be addressed to deploy models on analog CiM with acceptable accuracy loss.
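A common way to make a network tolerant of such effects is to inject matching perturbations into the weights during training. The sketch below is illustrative only and is not taken from the ARM/IBM work; it assumes a PyTorch-style linear layer and simple Gaussian models of NVM write noise and conductance drift, with hypothetical noise magnitudes.

```python
import torch
import torch.nn as nn

class NoisyAnalogLinear(nn.Module):
    """Linear layer that emulates NVM write noise and conductance drift
    during training so the learned weights tolerate analog imperfections.
    Noise magnitudes are illustrative assumptions, not values from the paper."""

    def __init__(self, in_features, out_features,
                 write_noise_std=0.02, drift_std=0.01):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.write_noise_std = write_noise_std
        self.drift_std = drift_std

    def forward(self, x):
        w = self.linear.weight
        if self.training:
            # Multiplicative write noise: each programmed conductance
            # deviates slightly from its target value.
            w = w * (1.0 + torch.randn_like(w) * self.write_noise_std)
            # Additive term standing in for conductance drift over time.
            w = w + torch.randn_like(w) * self.drift_std * w.abs().mean()
        return nn.functional.linear(x, w, self.linear.bias)
```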
Researchers from ARM and IBM Research Zurich looked at TinyML models for the popular always-on tasks of keyword spotting (KWS) and visual wake words (VWW). The model architectures are specifically designed for analog CiM, and the team details a comprehensive training methodology to retain accuracy in the face of analog non-idealities and low-precision data converters at inference time.
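One standard ingredient of such training methodologies is to simulate the fixed-gain, low-precision ADC at the output of each CiM tile inside the training graph, so the model learns to work within the converter's limited range and resolution. The fragment below is a generic sketch of that idea, not the authors' method; the bit width and full-scale range are assumed values, and gradients are passed through with a straight-through estimator.

```python
import torch

def fake_adc(x, n_bits=6, full_scale=4.0):
    """Quantize tile outputs as a fixed-gain ADC would, while keeping
    gradients flowing via the straight-through estimator. Bit width and
    full-scale range here are illustrative assumptions."""
    levels = 2 ** n_bits - 1
    step = 2 * full_scale / levels
    clipped = torch.clamp(x, -full_scale, full_scale)
    quantized = torch.round(clipped / step) * step
    # Straight-through estimator: forward uses the quantized value,
    # backward treats the rounding as identity within the clip range.
    return clipped + (quantized - clipped).detach()
```

Applied after every analog matrix-vector product during training, a stage like this lets the network adapt to quantization error instead of encountering it for the first time at inference.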