Vendor: Nuclei System Technology Category: NPU

Neural Network Accelerator

The Neural-Network Accelerator (NACC) improves the inference performance of neural networks.

Overview

The Neural-Network Accelerator (NACC) improves the inference performance of neural networks. The NACC operates on INT8 data and accelerates im2col, convolution, depthwise convolution, average pooling, max pooling, fully connected layers, activation functions, and matrix multiplication. It can accelerate networks such as ResNet, MobileNet, AlexNet, VGG16, and YOLO. This IP can be used as an accelerator for functions in NMSIS and in the open-source TensorFlow Lite.
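The overview states that the NACC's native data type is INT8. As background on what INT8 inference involves, here is a minimal NumPy sketch of standard affine (scale/zero-point) quantization; this is my own illustration of the general technique, not the accelerator's actual quantization scheme:

```python
import numpy as np

def quantize_int8(x, scale, zero_point=0):
    """Affine-quantize float values to INT8: q = round(x / scale) + zp,
    clipped to the signed 8-bit range [-128, 127]."""
    q = np.round(x / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

def dequantize_int8(q, scale, zero_point=0):
    """Recover approximate float values from INT8 codes."""
    return (q.astype(np.int32) - zero_point) * scale

# Example: quantize activations in roughly [-1, 1] with scale 1/128.
scale = 1.0 / 128
x = np.array([0.5, -0.25, 1.0])
q = quantize_int8(x, scale)          # INT8 codes, e.g. 0.5 -> 64
x_hat = dequantize_int8(q, scale)    # close to x, within one step
```

Quantizing weights and activations this way is what lets an INT8 engine replace floating-point multiply-accumulates with 8-bit integer ones, trading a bounded rounding error for much cheaper arithmetic.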

Key features

  • Supports 8-bit and 16-bit fixed-point data formats
  • Supports 128-bit or 512-bit bus transmission data widths
  • Compute capability configurable from 64 GOPS to 4 TOPS
  • Local single-port SRAM cache, with independent bus access to the SRAM
  • Accelerates the convolution, RGB convolution, depthwise convolution, pooling, fully connected, elementwise, and activation functions in NMSIS
  • Accelerates the matrix multiplication function in NMSIS
  • Accelerates the im2col function in NMSIS
  • Supports IFM splitting by width or height, and kernel splitting
  • Supports multicore mode
  • Shared SRAM for storing IFM, temporary data, and OFM
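Several of the features above (im2col, convolution, matrix multiplication) are linked: im2col is the standard trick that lowers a convolution to a single matrix multiply, which is what a matmul engine accelerates. As a rough illustration only (my own NumPy sketch, not vendor code; valid padding, stride 1, single channel assumed):

```python
import numpy as np

def im2col(x, kh, kw):
    """Unroll every kh x kw patch of a 2-D input into one row of a
    matrix, so a convolution becomes a single matrix multiplication."""
    h, w = x.shape
    oh, ow = h - kh + 1, w - kw + 1
    cols = np.empty((oh * ow, kh * kw), dtype=x.dtype)
    for i in range(oh):
        for j in range(ow):
            cols[i * ow + j] = x[i:i + kh, j:j + kw].ravel()
    return cols

def conv2d_via_im2col(x, k):
    """2-D convolution expressed as im2col followed by one matmul."""
    kh, kw = k.shape
    out = im2col(x, kh, kw) @ k.ravel()
    return out.reshape(x.shape[0] - kh + 1, x.shape[1] - kw + 1)

# Example: 4x4 input, 2x2 kernel of ones -> each output is a patch sum.
x = np.arange(16, dtype=np.float32).reshape(4, 4)
k = np.ones((2, 2), dtype=np.float32)
y = conv2d_via_im2col(x, k)  # shape (3, 3)
```

The cost of this lowering is the duplicated patch data that im2col produces, which is one reason an accelerator benefits from dedicated local SRAM for the IFM and intermediate data.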

Files

Note: some files may require an NDA depending on provider policy.

Specifications

Identity

Part Number
uNPU
Vendor
Nuclei System Technology

Provider

Nuclei System Technology
HQ: China
Nuclei System Technology is a top RISC-V processor IP vendor based in China. Nuclei is dedicated to developing configurable, low-power, high-performance 32/64-bit RISC-V processors and related solutions for AIoT applications. Nuclei has developed several product series to address the full range of embedded system applications, including the N100, N200, N300, and N/NX/UX 600 series, with extensibility and security features. Nuclei has collaborated with many well-known companies on silicon-proven solutions, e.g. the first general-purpose RISC-V MCU, the GD32VF103, with GigaDevice.


Frequently asked questions about NPU IP cores

What is Neural Network Accelerator?

Neural Network Accelerator is an NPU IP core from Nuclei System Technology listed on Semi IP Hub.

How should engineers evaluate this NPU?

Engineers should review the overview, key features, supported foundries and nodes, maturity, deliverables, and provider information before shortlisting this NPU IP.

Can this semiconductor IP be compared with similar products?

Yes. Buyers can compare this product with similar semiconductor IP cores or IP families based on category, provider, process options, and structured technical specifications.
