Vendor: Batik Semiconductor Category: NPU

AI Accelerator Specifically for CNN

Our IP inference accelerators enhance AI computations, providing outstanding performance across various applications.

Overview

Whether your needs involve real-time object detection, natural language processing, image recognition, or other AI tasks, our accelerators enable faster and more efficient AI processing, strengthening the competitiveness of your applications.
The parameterizable AI accelerator (GenCore) comprises several SystemVerilog RTL kernels: DataFetcher, Conv+ReLU, MaxPooling, and DataWriter.
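The dataflow through the cascaded kernels can be illustrated in software. The following is a minimal NumPy sketch of the Conv+ReLU and MaxPooling stages only, not the RTL itself; the function names and the 2x2 pooling window are illustrative assumptions, not part of the GenCore deliverables.

```python
import numpy as np

def conv_relu(x, w):
    # "Valid" 2-D cross-correlation followed by ReLU, mirroring the
    # fused Conv+ReLU kernel described above (single channel, stride 1).
    kh, kw = w.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return np.maximum(out, 0.0)

def max_pool(x, size=2):
    # Non-overlapping max pooling (assumed 2x2 window for illustration).
    oh, ow = x.shape[0] // size, x.shape[1] // size
    return x[:oh * size, :ow * size].reshape(oh, size, ow, size).max(axis=(1, 3))

# Stream one 6x6 feature map through the cascaded stages:
x = np.arange(36, dtype=float).reshape(6, 6)
w = np.array([[-1.0, 0.0], [0.0, 1.0]])   # diagonal-difference kernel
y = max_pool(conv_relu(x, w))             # 6x6 -> 5x5 -> 2x2
```

Because each stage consumes the previous stage's output directly, no inter-layer buffer leaves the pipeline, which is the property the Benefits section highlights.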

Key features

  • Specialized hardware with controlled throughput and hardware cost/resources, using parameterizable layers, configurable weights, and precision settings to support fixed-point operations.
  • The hardware aims to accelerate inference operations, particularly for CNNs such as LeNet-5, VGG-16, VGG-19, AlexNet, and ResNet-50.
  • Customers also have the flexibility to customize their own CNN models and adapt them to this hardware, specifying the number of layers, the weights, and bit configurations with fixed-point precision of up to 16 bits.
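To make the 16-bit fixed-point feature concrete, here is a small sketch of how floating-point weights could be quantized for such a core. The Q3.12 split (12 fractional bits) is a hypothetical choice for illustration; GenCore's actual bit configurations are parameterizable and are not specified here.

```python
import numpy as np

def to_fixed(w, total_bits=16, frac_bits=12):
    # Quantize float weights to signed fixed point (assumed Q3.12 layout),
    # saturating at the representable range of a 16-bit word.
    scale = 1 << frac_bits
    lo = -(1 << (total_bits - 1))
    hi = (1 << (total_bits - 1)) - 1
    return np.clip(np.round(w * scale), lo, hi).astype(np.int16)

def to_float(q, frac_bits=12):
    # Dequantize back for error checking on the host side.
    return q.astype(float) / (1 << frac_bits)

w = np.array([0.5, -1.25, 3.1])
q = to_fixed(w)           # 16-bit integers the hardware would consume
err = np.abs(to_float(q) - w)
```

A flow like this would let a customer trade precision against hardware cost when mapping a trained model onto the configurable bit widths.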

Block Diagram

Benefits

  • The deep pipeline of cascaded kernels streamlines basic CNN operations, eliminating the need to store inter-layer data externally. This reduces memory bandwidth demands, which is crucial for embedded FPGAs and ASICs.
  • A single hardware kernel handles both convolution and fully connected layers, improving overall resource efficiency.
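Sharing one kernel between convolution and fully connected layers works because a fully connected layer is mathematically a convolution whose filter covers the entire input. A short NumPy check of that equivalence (the shapes and random data are illustrative, not tied to GenCore):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4))      # input feature map
W = rng.standard_normal((3, 4, 4))   # 3 output neurons, each a full-size 4x4 kernel

# Classic fully connected layer: flatten input, matrix-vector multiply.
fc = W.reshape(3, -1) @ x.reshape(-1)

# Same result as a "valid" convolution with kernel size == input size,
# which produces a 1x1 output per filter.
conv = np.array([np.sum(x * W[k]) for k in range(3)])
```

Since both layer types reduce to the same multiply-accumulate pattern, a single datapath can be time-shared between them, which is the resource-efficiency benefit claimed above.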

Applications

  • CNN
  • Support for Transformer-based LLM models is in progress, or the design can be tailored to meet customer needs

What’s Included?

  • RTL Files
  • MATLAB Models
  • Guidance and support for customizing CNN models

Files

Note: some files may require an NDA depending on provider policy.

Specifications

Identity

Part Number: GenCore
Vendor: Batik Semiconductor
Type: Silicon IP

Provider

Batik Semiconductor
HQ: Indonesia
Our mission is to transform AI technology with cutting-edge silicon IP cores, reshaping industries and unlocking new possibilities for AI. We design and deliver advanced silicon IP cores for AI accelerators, empowering researchers and industry leaders to drive innovation and solve complex challenges.


Frequently asked questions about NPU IP cores

What is AI Accelerator Specifically for CNN?

AI Accelerator Specifically for CNN is an NPU IP core from Batik Semiconductor listed on Semi IP Hub.

How should engineers evaluate this NPU?

Engineers should review the overview, key features, supported foundries and nodes, maturity, deliverables, and provider information before shortlisting this NPU IP.

Can this semiconductor IP be compared with similar products?

Yes. Buyers can compare this product with similar semiconductor IP cores or IP families based on category, provider, process options, and structured technical specifications.
