ARM, IBM team on low-power analog AI chip

By Nick Flaherty, eeNews Europe (September 9, 2022)

Researchers at ARM and IBM have developed a 14nm analog compute-in-memory chip for low-power, always-on machine learning.

Always-on perception tasks in IoT applications, dubbed TinyML, require very high energy efficiency. Analog compute-in-memory (CiM) using non-volatile memory (NVM) promises high energy efficiency and self-contained on-chip model storage.

However, analog CiM introduces new practical challenges, including conductance drift, read/write noise, and fixed analog-to-digital converter (ADC) gain. These must be addressed to achieve models that can be deployed on analog CiM with acceptable accuracy loss.
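To make these non-idealities concrete, the sketch below simulates a single matrix-vector multiply on an analog CiM crossbar with simple models of conductance drift, read noise, and a fixed-gain ADC. The constants (drift exponent, noise level, ADC range) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def analog_mvm(weights, x, t_seconds=0.0, drift_nu=0.06,
               read_noise_std=0.02, adc_bits=8, adc_range=4.0,
               rng=np.random.default_rng(0)):
    """Toy model of one analog CiM matrix-vector multiply.
    All non-ideality parameters are hypothetical defaults."""
    # Conductance drift: programmed conductances decay over time (power-law model).
    t0 = 1.0  # reference time after programming, in seconds
    w_eff = weights * ((t_seconds + t0) / t0) ** (-drift_nu)

    # Read noise: a small random perturbation on every access, scaled by weight magnitude.
    w_eff = w_eff + read_noise_std * np.abs(weights) * rng.standard_normal(weights.shape)

    # Analog accumulation along the bit lines.
    y_analog = w_eff @ x

    # Fixed-gain ADC: clip to a fixed full-scale range and quantize uniformly.
    levels = 2 ** adc_bits
    y_clipped = np.clip(y_analog, -adc_range, adc_range)
    step = 2 * adc_range / (levels - 1)
    return np.round(y_clipped / step) * step
```

Running the same multiply at increasing `t_seconds` or `read_noise_std` shows how the digitized output drifts away from the ideal `weights @ x`, which is exactly the accuracy loss the training methodology has to absorb.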

Researchers from ARM and IBM Research Zurich looked at TinyML models for the popular always-on tasks of keyword spotting (KWS) and visual wake words (VWW). The model architectures are designed specifically for analog CiM, and the team details a comprehensive training methodology to retain accuracy in the face of analog non-idealities and low-precision data converters at inference time.
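The article does not spell out the training recipe, but a common approach to this kind of hardware-aware training is to inject weight noise during the forward pass so the learned model tolerates analog read noise at inference. The following is a minimal PyTorch sketch of that idea; the layer sizes, noise level, and class count are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class NoisyLinear(nn.Linear):
    """Linear layer that adds Gaussian weight noise while training,
    a generic form of noise-aware training (not the paper's exact method)."""
    def __init__(self, in_features, out_features, noise_std=0.05):
        super().__init__(in_features, out_features)
        self.noise_std = noise_std

    def forward(self, x):
        if self.training and self.noise_std > 0:
            # Perturb weights proportionally to their magnitude, like analog read noise.
            noise = self.noise_std * self.weight.abs().detach() * torch.randn_like(self.weight)
            return nn.functional.linear(x, self.weight + noise, self.bias)
        return super().forward(x)

# Hypothetical keyword-spotting style classifier built from noise-aware layers:
# 40 MFCC features in, 12 keyword classes out.
model = nn.Sequential(
    NoisyLinear(40, 64), nn.ReLU(),
    NoisyLinear(64, 12),
)
```

Trained this way, the weights settle into regions that remain accurate when the deployed analog hardware perturbs them, which is the goal the researchers describe for KWS and VWW workloads.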

