Overview
Single-core neural network accelerator offering 0.5 to 4 TOPS, optimized for machine learning inference applications.
The Cadence® Tensilica® NNA 110 accelerator pairs a custom hardware accelerator engine (NNE) with a Tensilica Vision P6 or P1 DSP. The specialized compute block inside the NNA 110 leverages features such as random sparsity and tensor compression/decompression to deliver a best-in-class embedded AI accelerator solution.
A single-core NNA 110 accelerator supports 256 to 2K 8x8-bit MAC computations and offers various user-defined configuration options. The NNA 110 accelerator can run all neural network layers, including but not limited to convolution, fully connected, LSTM, LRN, and pooling operations. The accompanying Tensilica DSP can run any operation that is not native to the accelerator, making NNA 110 a highly flexible, robust, and future-proof offering. NNA 110 solution deliverables comprise turnkey soft RTL IP, a software compiler toolchain, and an accurate simulator for benchmarking.
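The 0.5 to 4 TOPS range lines up with the 256 to 2K MAC configurations if one assumes a nominal 1 GHz clock and counts each MAC as two operations (multiply plus accumulate). Both figures are illustrative assumptions, not datasheet values; achievable frequency depends on process node and configuration. A minimal sketch of that arithmetic:

```python
# Sketch: relate the configured MAC count to peak throughput.
# Assumptions (for illustration only): 1 GHz clock, 2 ops per MAC.
def peak_tops(num_macs, clock_hz=1e9, ops_per_mac=2):
    """Peak throughput in tera-operations per second (TOPS)."""
    return num_macs * ops_per_mac * clock_hz / 1e12

print(peak_tops(256))   # smallest config: ~0.5 TOPS
print(peak_tops(2048))  # largest config:  ~4 TOPS
```

Under these assumptions the smallest configuration lands at roughly 0.5 TOPS and the largest at roughly 4 TOPS, matching the advertised range.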
Provider
Cadence Design Systems, Inc.
HQ:
USA
If you want to achieve silicon success, let Cadence help you choose the right IP solution and capture its full value in your SoC design. Cadence® IP solutions offer the combined advantages of a high-quality portfolio, an open platform, a modern IP factory approach to quality, and a strong ecosystem.
Now you can tackle IP-to-SoC development in a system context, focus your internal effort on differentiation, and leverage multi-function cores to do more, faster.
The Cadence IP Portfolio includes silicon-proven Tensilica® IP cores, analog PHY interfaces, standards-based IP cores, verification IP cores, and other solutions as well as customization services for current and emerging industry standards. The Cadence IP Factory provides you with an automated approach to the customization, delivery, and verification of SoC IP. As a result, you can spend more time on differentiation, with the assurance that you'll meet your performance, power, and area requirements.
Choosing Cadence IP enables you to design with confidence because you have more freedom to innovate your SoCs with less risk and faster time to market.