Vendor: Xilinx, Inc. Category: NPU

DPU for Convolutional Neural Network

The Xilinx® Deep Learning Processor Unit (DPU) is a programmable engine dedicated to convolutional neural networks.

Overview

The Xilinx® Deep Learning Processor Unit (DPU) is a programmable engine dedicated to convolutional neural networks. The unit contains a register configuration module, a data controller module, and a convolution computing module. The DPU has a specialized instruction set, which allows it to run many convolutional neural networks efficiently. Networks deployed on the DPU include VGG, ResNet, GoogLeNet, YOLO, SSD, MobileNet, FPN, and others.
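
For context, launching one of these compiled networks from the processing system is typically driven through a runtime such as the legacy Xilinx DNNDK (N2Cube) API. The sketch below is illustrative only: the kernel name "resnet50" is a placeholder for whatever name the DNNC compiler assigned to your model, and the exact header path can vary between DNNDK releases.

```c
/*
 * A minimal sketch of launching a compiled network on the DPU with the
 * legacy Xilinx DNNDK (N2Cube) runtime. The kernel name "resnet50" is a
 * placeholder for the name the DNNC compiler gave your model, and the
 * header path may differ between DNNDK releases.
 */
#include <dnndk/dnndk.h>

int main(void) {
    dpuOpen();                                      /* attach to the DPU driver */
    DPUKernel *kernel = dpuLoadKernel("resnet50");  /* load the compiled model */
    DPUTask *task = dpuCreateTask(kernel, 0);       /* mode 0 = normal execution */

    /* ...fill the task's input tensor with preprocessed image data here... */

    dpuRunTask(task);                               /* blocks until the DPU finishes */

    /* ...read the task's output tensor and post-process here... */

    dpuDestroyTask(task);
    dpuDestroyKernel(kernel);
    dpuClose();
    return 0;
}
```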

The DPU IP can be integrated as a block in the programmable logic (PL) of selected Zynq®-7000 SoC and Zynq® UltraScale+™ MPSoC devices, with direct connections to the processing system (PS). To use the DPU, prepare the instructions and input image data at specific memory addresses that the DPU can access. DPU operation also requires the application processing unit (APU) to service interrupts that coordinate data transfer.
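
As a concrete illustration of that control flow, here is a hedged bare-metal C sketch: the APU places instruction and image buffers in DPU-visible memory, programs their physical addresses into the DPU's slave registers, starts the run, and waits for completion. Every address, register offset, and bit field below is a hypothetical placeholder, not the actual DPU register map; consult the Xilinx product guide for the real one.

```c
/*
 * Purely illustrative control-flow sketch for the usage model described
 * above. DPU_BASE and all register offsets/bits are HYPOTHETICAL
 * placeholders, not the real DPU register map.
 */
#include <stdint.h>

#define DPU_BASE        0xA0000000u  /* assumed PL address of the DPU slave port */
#define REG_START       0x00u        /* hypothetical: start/control register */
#define REG_IRQ_STATUS  0x08u        /* hypothetical: interrupt status register */
#define REG_INSN_ADDR   0x10u        /* hypothetical: instruction buffer address */
#define REG_DATA_ADDR   0x14u        /* hypothetical: input image buffer address */

static volatile uint32_t *dpu = (volatile uint32_t *)DPU_BASE;

static inline void dpu_write(uint32_t off, uint32_t val) { dpu[off / 4] = val; }
static inline uint32_t dpu_read(uint32_t off) { return dpu[off / 4]; }

void dpu_run(uint32_t insn_pa, uint32_t data_pa) {
    dpu_write(REG_INSN_ADDR, insn_pa);   /* physical address of DPU instructions */
    dpu_write(REG_DATA_ADDR, data_pa);   /* physical address of input image data */
    dpu_write(REG_START, 1);             /* kick off the run */
    while ((dpu_read(REG_IRQ_STATUS) & 1) == 0)
        ;                                /* poll; a real driver would service the IRQ */
    dpu_write(REG_IRQ_STATUS, 1);        /* acknowledge completion */
}
```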

Key features

  • One slave AXI interface for accessing configuration and status registers
  • One AXI master interface for accessing instructions
  • Supports configurable AXI master interface with 64 or 128 bits for accessing data
  • Supports individual configuration of each channel
  • Supports optional interrupt request generation
  • Some highlights of DPU functionality include:
    • Configurable hardware architecture: B512, B800, B1024, B1152, B1600, B2304, B3136, and B4096 (see the peak-throughput sketch after this list)
    • Configurable core number up to three
    • Convolution and deconvolution
    • Max pooling
    • ReLU and Leaky ReLU
    • Concat
    • Elementwise
    • Dilation
    • Reorg
    • Fully connected layer
    • Batch Normalization
    • Split
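
The B-numbers in the architecture list above conventionally denote peak operations per clock cycle, so a rough peak-throughput figure follows from simple arithmetic. The sketch below assumes that convention and an example 325 MHz DPU clock; actual clock rates depend on the device and on timing closure.

```c
/*
 * Peak-throughput arithmetic for the DPU architecture variants, assuming
 * the "B" number is peak operations per clock cycle. The 325 MHz clock is
 * an assumed example value, not a figure from this listing.
 */
#include <stdio.h>

int main(void) {
    const int ops_per_cycle[] = {512, 800, 1024, 1152, 1600, 2304, 3136, 4096};
    const double f_mhz = 325.0;  /* assumed DPU clock; varies by device and timing */

    for (int i = 0; i < 8; i++) {
        /* ops/cycle * MHz gives millions of ops per second; /1000 gives GOPS */
        double gops = ops_per_cycle[i] * f_mhz / 1000.0;
        printf("B%-5d -> %7.1f GOPS peak at %.0f MHz\n",
               ops_per_cycle[i], gops, f_mhz);
    }
    return 0;
}
```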

Specifications

Identity

Part Number: DPU IP
Vendor: Xilinx, Inc.
Type: Silicon IP

Files

Note: some files may require an NDA depending on provider policy.

Provider

Xilinx, Inc.
HQ: USA

Learn more about NPU IP cores

Heterogeneous NPU Data Movement Tax: Intel's Own Slides Tell the Story

At Quadric, we have long argued that heterogeneous NPU designs — those that stitch together multiple specialized fixed-function engines — carry an unavoidable hidden cost: data has to move. A lot. And data movement burns power, adds latency, and creates silicon-area overhead that scales with every new generation of AI models. Now, Intel has made that case for us.

The Upcoming NPU Shakeout

The IP industry is no stranger to boom and bust cycles, and it looks to be at the crest of another wave.

Frequently asked questions about NPU IP cores

What is DPU for Convolutional Neural Network?

DPU for Convolutional Neural Network is an NPU IP core from Xilinx, Inc. listed on Semi IP Hub.

How should engineers evaluate this NPU?

Engineers should review the overview, key features, supported foundries and nodes, maturity, deliverables, and provider information before shortlisting this NPU IP.

Can this semiconductor IP be compared with similar products?

Yes. Buyers can compare this product with similar semiconductor IP cores or IP families based on category, provider, process options, and structured technical specifications.
