Analog Compute is Key to The Next Era of AI Innovation
By Tim Vehling, Mythic
EETimes (January 5, 2022)
As AI applications become more popular in a growing number of industries, the need for more compute resources, more model storage capacity and, at the same time, lower power consumption is becoming increasingly important. Today’s digital processors used for AI applications struggle to deliver these challenging requirements, especially for large machine learning models running at the edge. Analog compute offers an innovative solution, enabling companies to get more performance at lower power consumption in a small form factor that’s also cost efficient.
The computational speed and power efficiency of analog compared to digital have long been promising. Historically, however, there have been a number of hurdles to developing analog systems, including the size and cost of analog processors. Recent approaches have shown that pairing analog compute with non-volatile memory (NVM) like flash memory – a combination called analog compute-in-memory (CIM) – can eliminate these hurdles.
Unlike digital computing systems that rely on high-throughput DRAM that consumes too much power, analog CIM systems can take advantage of the incredible density of flash memory for data storage and computing. This eliminates the high power consumption that comes with accessing and maintaining data in DRAM in a digital computing system. With the analog CIM approach, processors can perform arithmetic operations inside NVM cells by manipulating and combining small electrical currents across the entire memory bank in a fast and low-power manner.
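The in-memory arithmetic described above can be sketched in a few lines of Python. This is a simplified illustrative model, not Mythic's actual design: each flash cell stores a weight as a conductance, an input activation is applied as a voltage, Ohm's law gives a per-cell current, and Kirchhoff's current law sums those currents on a shared bitline, so a dot product happens inside the memory array itself. The function names, the noise level, and the ADC resolution are all assumptions for illustration.

```python
import random

random.seed(0)

def analog_bitline(conductances, voltages, noise=0.01):
    """One flash bitline: each cell contributes I = G * V (Ohm's law),
    and the shared bitline sums those currents (Kirchhoff's current law),
    computing a dot product in place without moving weights to DRAM.
    The noise term models cell-to-cell variation (illustrative value)."""
    current = sum(g * v for g, v in zip(conductances, voltages))
    return current * (1.0 + random.gauss(0.0, noise))

def adc_read(current, full_scale, bits=8):
    """Quantize the analog bitline current back to a digital code at the
    array edge (an 8-bit ADC is assumed here for illustration)."""
    levels = (1 << bits) - 1
    clipped = max(0.0, min(current, full_scale))
    return round(clipped / full_scale * levels)

# A tiny "memory array": three rows of stored weights (conductances).
weights = [[0.2, 0.8, 0.5, 0.1],
           [0.9, 0.4, 0.3, 0.7],
           [0.6, 0.6, 0.2, 0.9]]
inputs = [0.5, 1.0, 0.25, 0.75]   # input activations applied as voltages

outputs = [adc_read(analog_bitline(row, inputs), full_scale=len(inputs))
           for row in weights]
print(outputs)
```

Because the multiply-accumulate happens where the weights already reside, no per-operation weight fetch from DRAM is needed, which is the source of the power savings the article describes.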
Read the full article at EETimes.