Edge AI Accelerator
Edge AI Accelerator IP cores enable low-power on-device machine-learning inference in modern SoC and ASIC designs.
These IP cores are optimized for neural-network execution at the edge, combining low latency, efficient tensor processing, and strong TOPS-per-watt characteristics.
This catalog allows you to compare Edge AI Accelerator IP cores from leading vendors based on AI performance, power efficiency, latency, and process node compatibility.
Whether you are designing smart cameras, IoT devices, wearables, or automotive systems, you can find the right Edge AI Accelerator IP for your application.
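As a quick illustration of the TOPS-per-watt figure of merit used to compare cores in this catalog, the sketch below divides peak throughput by power draw. The numbers are made up for illustration and do not describe any vendor's product:

```python
# Illustrative only: hypothetical accelerator figures, not vendor data.
def tops_per_watt(tops: float, power_w: float) -> float:
    """Efficiency figure of merit: peak TOPS divided by power in watts."""
    return tops / power_w

# A hypothetical edge NPU delivering 4 TOPS at 0.5 W:
print(tops_per_watt(4.0, 0.5))  # 8.0 TOPS/W
```

Two cores with the same peak TOPS can differ widely on this metric, which is why the catalog lists power efficiency alongside raw AI performance.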
ComputeRAM is an SRAM macro with integrated compute capability.
IP cores for ultra-low power AI-enabled devices
Each nearbAI core is an ultra-low power neural processing unit (NPU) and comes with an optimizer / neural network compiler.
Ultra-low-power inference engine
Innatera's ultra-efficient neuromorphic processors mimic the brain's mechanisms for processing sensory data.
Configurable AI inference processor IP that can be optimized for performance and size, and can process data such as images, videos, …
RISC-V-Based, Open Source AI Accelerator for the Edge
Coral NPU is a machine learning (ML) accelerator core designed for energy-efficient AI at the edge.
License our analog in-memory compute macros (e.g., 32×32 X1 crossbars) for integration into your ASIC or SoC.
Neuromorphic Processor IP (Second Generation)
Akida is a neural processor platform inspired by the cognitive ability and efficiency of the human brain.
DeepMentor has developed an AI IP that combines low-power and high-performance features with a RISC-V SoC.
Gyrus's ground-breaking AI Processor Accelerator IP, coupled with a native graph-processing software stack, is the ul…
Performance AI Accelerator for Edge Computing
The NeuroMosaic Processor (NMP) family is shattering the barriers to deploying ML by delivering a general-purpose architecture an…
Performance Efficiency AI Accelerator
The NeuroMosaic Processor (NMP) family is shattering the barriers to deploying ML by delivering a general-purpose architecture an…
Lowest Power and Cost End Point AI Accelerator
The NeuroMosaic Processor (NMP) family is shattering the barriers to deploying ML by delivering a general-purpose architecture an…
Highly scalable performance for classic and generative on-device and edge AI solutions
Scalable and Power-Efficient Neural Processing Units
The Neo NPUs offer energy-efficient hardware-based AI engines that can be pa…
Low-power, high-speed reconfigurable processor to accelerate AI everywhere.
Zhufeng-800 is a low-power, high-speed reconfigurable processor that accelerates AI everywhere.
IP platform for intelligence gathering chips at the Edge
Designed to be the solution for an AI compute device right at the Edge, Sondrel’s new SFA 100 IP reference platform makes creatin…
Neural-network-based noise cancellation
In a world where videoconferencing, team gaming, and voice-operated systems proliferate, it is vital to extract clear, intelligib…
The So_ip_idt core can be used to create a decision tree directly in hardware.
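The appeal of a hardware decision tree is that each node is just a threshold comparison, which maps directly onto comparators and multiplexers in silicon. The hypothetical sketch below (not the So_ip_idt API; the sensor names and thresholds are invented) shows the kind of tree such a core would implement:

```python
# Hypothetical example: a two-level decision tree as nested threshold
# compares. Each branch corresponds to one comparator stage in hardware.
def classify(temp_c: float, vibration_g: float) -> int:
    if temp_c < 40.0:
        return 0  # class 0: normal operation
    if vibration_g < 2.5:
        return 1  # class 1: warm but mechanically stable
    return 2      # class 2: probable fault

print(classify(35.0, 1.0))  # 0
print(classify(50.0, 3.0))  # 2
```

Because the tree depth fixes the number of comparison stages, such a design evaluates in a small, constant number of cycles, with no instruction fetch at all.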
GPNPU Processor IP - 32 to 864 TOPs
Designed from the ground up to address significant machine learning (ML) inference deployment challenges facing system on chip (S…
GPNPU Processor IP - 16 to 108 TOPs
Designed from the ground up to address significant machine learning (ML) inference deployment challenges facing system on chip (S…
GPNPU Processor IP - 4 to 28 TOPs
Designed from the ground up to address significant machine learning (ML) inference deployment challenges facing system on chip (S…