- NPU
NPU (Neural Processing Unit) IP cores are specialized AI processors designed to accelerate machine learning inference in modern SoC and ASIC designs.
These IP cores implement highly optimized architectures for tensor operations, including matrix multiplication and convolution, enabling efficient execution of deep learning models.
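The reduction these architectures exploit can be shown concretely. The sketch below (illustrative only, not any vendor's implementation) uses NumPy to express a 2-D convolution as a single matrix multiplication via the common im2col unrolling, which is exactly the kind of tensor operation an NPU's MAC array accelerates:

```python
import numpy as np

def im2col(x, kh, kw):
    """Unroll every kh x kw patch of a single-channel image into one row."""
    h, w = x.shape
    out_h, out_w = h - kh + 1, w - kw + 1
    cols = np.empty((out_h * out_w, kh * kw))
    for i in range(out_h):
        for j in range(out_w):
            cols[i * out_w + j] = x[i:i + kh, j:j + kw].ravel()
    return cols

# A 2-D convolution (cross-correlation) expressed as one matrix multiply.
x = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 input
k = np.ones((3, 3))                            # toy 3x3 kernel
y = (im2col(x, 3, 3) @ k.ravel()).reshape(2, 2)
```

Real NPU datapaths apply the same idea in hardware, streaming patches into a systolic or SIMD MAC array instead of materializing the unrolled matrix in memory.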
This catalog allows you to compare NPU IP cores from leading vendors based on TOPS performance, energy efficiency (TOPS/W), latency, and supported neural network frameworks.
Whether you are designing AI-enabled SoCs, edge devices, smart cameras, or automotive systems, you can find the right NPU IP for your AI acceleration needs.
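When comparing entries in this catalog, it helps to know how the headline metrics relate. A minimal sketch, using hypothetical figures (not taken from any vendor datasheet), of how TOPS and TOPS/W are typically derived from a MAC array's width, clock, and power:

```python
# Hypothetical example figures, for illustration only.
macs_per_cycle = 4096      # parallel MAC units in the array
clock_hz = 1.0e9           # 1 GHz clock
power_w = 2.0              # average power draw in watts

# One MAC counts as two operations (a multiply and an add).
ops_per_sec = macs_per_cycle * clock_hz * 2
tops = ops_per_sec / 1e12          # peak throughput in TOPS
tops_per_w = tops / power_w        # energy efficiency in TOPS/W
```

Note that these are peak numbers; sustained throughput on a real model also depends on utilization, memory bandwidth, and the latency figures listed alongside them.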
WAVE-N is a high-performance, video-specialized NPU IP designed to deliver real-time, deep learning-based image enhancement for e…
Highly scalable inference NPU IP for next-gen AI applications
The inference neural processing unit (NPU) IP is suitable for high-performance edge devices including automotive, cameras, and mo…
The ZIA™ A3000 AI processor IP is a low-power processor specifically designed for edge-side neural network inference processing.
AI Accelerator Specifically for CNN
Our inference accelerator IP speeds up AI computations, delivering strong performance across a wide range of applications.
4-/8-bit mixed-precision NPU IP
Features an optimized network model compiler that reduces DRAM traffic from intermediate activation data by grouped layer partitio…
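Grouped layer partitioning (often called layer fusion) saves DRAM traffic by keeping an intermediate activation on chip whenever it fits in local SRAM, so it is never written out and read back. A back-of-the-envelope sketch with hypothetical layer sizes (illustrative only, not this vendor's compiler):

```python
# Hypothetical per-layer output activation sizes in bytes (illustrative only).
acts = [4_000_000, 1_500_000, 1_500_000, 500_000]
sram_bytes = 2_000_000   # assumed on-chip activation buffer budget

# Ungrouped: every intermediate activation is written to DRAM by one layer
# and read back by the next (2 transfers each).
ungrouped = sum(2 * a for a in acts[:-1])

# Grouped: when consecutive layers are fused and the intermediate fits in
# SRAM, it stays on chip; only oversized activations still spill to DRAM.
grouped = sum(2 * a for a in acts[:-1] if a > sram_bytes)
```

In this toy case fusion cuts intermediate-activation DRAM traffic from 14 MB to 8 MB per inference; real compilers also weigh weight reloads and tiling overhead when choosing layer groups.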
Ceva-NeuPro Studio is a comprehensive software development environment designed to streamline the development and deployment of A…
The Ceva-NeuPro-Nano is an efficient and self-sufficient Edge NPU designed for Embedded ML applications.
Future-proof IP for training and inference with leading performance per watt and per dollar
Tenstorrent develops AI IP with precision, anchored in RISC-V’s open architecture, delivering specialized, silicon-proven solutio…
Device makers need a small, inexpensive, low-power chip that can run large AI models in order to lead the market with their …
Run-time Reconfigurable Neural Network IP
The Dynamic Neural Accelerator II (DNA-II) is an efficient neural network IP core that can be paired with any host processor.
Neural engine IP - Tiny and Mighty
Small, low-power dedicated AI engines are essential for home appliances, security cameras, and always-on smartphone features.
Scalable Edge NPU IP for Generative AI
Ceva-NeuPro-M is a scalable NPU architecture, ideal for transformers, Vision Transformers (ViT), and generative AI applications, …
Edge-Friendly LLM and CNN AI Inference Processing. Edge devices are increasingly equipped with AI processing capabilities that enh…
Mobile-Centric LLM and CNN AI Inference Processing. Consumers are excited about the latest AI features in smartphones.
High-Performance Scalability Across Complex Models. Cloud-based AI inference is the backbone of retail, e-commerce, healthcare, in…
Whether deployed in-cabin for driver distraction monitoring or in the advanced driver assistance system (ADAS) stack for object recognition and point…
Optimized Neural Processing for Next-Generation Machine Learning, with High-Efficiency, Scalable AI Compute. Characteristics: Sca…
Accelerate Edge AI Innovation. AI data-processing workloads at the edge are already transforming use cases and user experiences.
The new Synopsys ARC® NPX Neural Processing Unit (NPU) IP family delivers the industry’s highest performance and support for the …
High-performance, efficient deep learning accelerator for edge and end-point inference
AndesAIRE™ AnDLA™ I350 is a deep learning accelerator (DLA) designed to enable highly performance-efficient and cost-sensitive AI s…