AI and Machine Learning accelerator IP cores are specialized hardware blocks designed to accelerate neural network inference and machine learning workloads in modern SoC and ASIC designs.
These IP cores, often referred to as NPU (Neural Processing Units) or AI accelerators, deliver high performance and energy efficiency for applications such as computer vision, speech recognition, natural language processing, and autonomous systems.
This catalog allows you to compare AI/ML accelerator IP cores from leading vendors by performance (TOPS), power efficiency, supported frameworks, and process node compatibility.
Whether you are targeting edge AI devices, automotive systems, consumer electronics, or data center acceleration, you can identify the most suitable AI IP for your design.
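As an illustration of the kind of comparison this catalog supports, the sketch below ranks a handful of NPU IP cores by power efficiency (TOPS per watt). All core names and figures here are invented for the example, not vendor data.

```python
# Compare hypothetical NPU IP cores by power efficiency (TOPS/W).
# All core names and numbers below are illustrative, not vendor data.
cores = [
    {"name": "CoreA", "tops": 8.0,  "watts": 2.0},
    {"name": "CoreB", "tops": 1.0,  "watts": 0.1},
    {"name": "CoreC", "tops": 32.0, "watts": 10.0},
]

for c in cores:
    c["tops_per_watt"] = c["tops"] / c["watts"]

# Rank from most to least efficient.
ranked = sorted(cores, key=lambda c: c["tops_per_watt"], reverse=True)
for c in ranked:
    print(f'{c["name"]}: {c["tops_per_watt"]:.1f} TOPS/W')
```

Note that raw TOPS and TOPS/W alone rarely decide a selection; supported frameworks, quantization formats, and process node compatibility usually weigh just as heavily.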
WAVE-N is a high-performance, video-specialized NPU IP designed to deliver real-time, deep learning-based image enhancement for e…
Highly scalable inference NPU IP for next-gen AI applications
The inference neural processing unit (NPU) IP is suitable for high-performance edge devices including automotive, cameras, and mo…
The ZIA™ A3000 AI processor IP is a low-power processor specifically designed for edge-side neural network inference processing.
AI Accelerator Specifically for CNN
Our IP inference accelerators enhance AI computations, providing outstanding performance across various applications.
ComputeRAM is an SRAM macro with integrated compute capability.
IP cores for ultra-low power AI-enabled devices
Each nearbAI core is an ultra-low power neural processing unit (NPU) and comes with an optimizer / neural network compiler.
4-/8-bit mixed-precision NPU IP
Features an optimized network model compiler that reduces DRAM traffic from intermediate activation data by grouped layer partitio…
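To see why grouping layers cuts DRAM traffic from intermediate activations, consider a minimal model: without grouping, every layer's output is written to DRAM and read back by the next layer; within a fused group, intermediates stay in on-chip SRAM and only the group's final output is spilled. The sketch below counts only activation traffic (weights are ignored), and the per-layer activation sizes are invented for the illustration.

```python
# Illustrative model of activation DRAM traffic with and without layer
# grouping. Sizes (in MB) are invented; weight traffic is ignored.
acts = [4.0, 4.0, 2.0, 2.0, 1.0]  # output activation size of layers 0..4

def dram_traffic(act_sizes, groups):
    """Sum DRAM writes + reads for activations crossing group boundaries.

    `groups` is a list of (start, end) layer-index ranges processed
    back-to-back on-chip. Each group's output is written to DRAM once;
    every group except the last has that output read back by the next
    group. Intermediates inside a group never touch DRAM.
    """
    traffic = 0.0
    for gi, (start, end) in enumerate(groups):
        out = act_sizes[end]
        traffic += out            # write group output to DRAM
        if gi + 1 < len(groups):
            traffic += out        # next group reads it back
    return traffic

# Ungrouped: every layer is its own group -> every intermediate spills.
ungrouped = dram_traffic(acts, [(i, i) for i in range(len(acts))])
# Grouped: layers 0-2 fused, layers 3-4 fused -> one spill in between.
grouped = dram_traffic(acts, [(0, 2), (3, 4)])
print(ungrouped, grouped)
```

In this toy model the ungrouped schedule moves 25 MB of activations through DRAM, while the two-group schedule moves only 5 MB, which is the effect such a compiler exploits.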
Low Power AI Micro-Core IP generated with a fully customizable and flexible core generator.
Ultra low power inference engine
Innatera's ultra-efficient neuromorphic processors mimic the brain's mechanisms for processing sensory data.
Configurable AI inference processor IP, which can be optimized for performance and size and can process data such as images, videos, …
DeepMentor has developed an AI IP that combines low-power and high-performance features with the RISC-V SOC.
RISC-V-Based, Open Source AI Accelerator for the Edge
Coral NPU is a machine learning (ML) accelerator core designed for energy-efficient AI at the edge.
License our analog in-memory compute macros (e.g., 32×32 X1 crossbars) for integration into your ASIC or SoC.
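An analog crossbar computes a matrix-vector product in a single step: input voltages drive the rows, cell conductances act as the weights, and by Ohm's and Kirchhoff's laws each column current is the weighted sum of the inputs. The digital sketch below mimics a 32×32 crossbar; the conductance and voltage values are randomly generated purely for illustration.

```python
import random

N = 32  # crossbar dimension, matching a 32x32 macro

random.seed(0)
# Cell conductances G[i][j] (arbitrary units) play the role of weights.
G = [[random.uniform(0.0, 1.0) for _ in range(N)] for _ in range(N)]
# Input voltages applied to the rows.
v = [random.uniform(-1.0, 1.0) for _ in range(N)]

# Column current j is the Kirchhoff sum of per-cell currents G[i][j] * v[i],
# i.e. one matrix-vector multiply performed "in memory".
i_out = [sum(G[i][j] * v[i] for i in range(N)) for j in range(N)]
print(len(i_out))
```

In a real macro the column currents would then be digitized by ADCs; the point of the sketch is only that the multiply-accumulate happens in the array itself rather than in a separate datapath.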
Neuromorphic Processor IP (Second Generation)
Akida is a neural processor platform inspired by the cognitive ability and efficiency of the human brain.
Ceva-NeuPro Studio is a comprehensive software development environment designed to streamline the development and deployment of A…
The Ceva-NeuPro-Nano is an efficient and self-sufficient Edge NPU designed for Embedded ML applications.
Gyrus's ground-breaking AI Processor Accelerator IP, coupled with a native graph processing software stack, is the ul…
Future-proof IP for training and inference with leading performance per watt and per dollar
Tenstorrent develops AI IP with precision, anchored in RISC-V’s open architecture, delivering specialized, silicon-proven solutio…
All-analog Neural Signal Processor
Empowering innovations with Blumind's proprietary architecture. Unlock the true potential of analog AI compute with Blumind's cutt…
Performance AI Accelerator for Edge Computing
The NeuroMosaic Processor (NMP) family is shattering the barriers to deploying ML by delivering a general-purpose architecture an…