Edge AI Accelerator
Edge AI Accelerator IP cores enable low-power on-device machine-learning inference in modern SoC and ASIC designs.
These IP cores are optimized for neural-network execution at the edge, combining low latency, efficient tensor processing, and strong TOPS-per-watt characteristics.
This catalog allows you to compare Edge AI Accelerator IP cores from leading vendors based on AI performance, power efficiency, latency, and process node compatibility.
Whether you are designing smart cameras, IoT devices, wearables, or automotive systems, you can find the right Edge AI Accelerator IP for your application.
ComputeRAM is an SRAM macro with integrated compute capability.
IP cores for ultra-low power AI-enabled devices
Each nearbAI core is an ultra-low power neural processing unit (NPU) and comes with an optimizer / neural network compiler.
Low Power AI Micro-Core IP generated with a fully customizable and flexible core generator.
Ultra low power inference engine
Innatera's ultra-efficient neuromorphic processors mimic the brain's mechanisms for processing sensory data.
Configurable AI inference processor IP that can be optimized for performance and size and can process data such as images, videos, …
DeepMentor has developed an AI IP that combines low-power and high-performance features with a RISC-V SoC.
RISC-V-Based, Open Source AI Accelerator for the Edge
Coral NPU is a machine learning (ML) accelerator core designed for energy-efficient AI at the edge.
License our analog in-memory compute macros (e.g., 32×32 X1 crossbars) for integration into your ASIC or SoC.
Neuromorphic Processor IP (Second Generation)
Akida is a neural processor platform inspired by the cognitive ability and efficiency of the human brain.
Gyrus's ground-breaking AI Processor Accelerator IP, coupled with a native graph processing software stack, is the ul…
All-analog Neural Signal Processor
Empowering Innovations with Blumind's Proprietary Architecture
Unlock the true potential of analog AI compute with Blumind's cutt…
Performance AI Accelerator for Edge Computing
The NeuroMosaic Processor (NMP) family is shattering the barriers to deploying ML by delivering a general-purpose architecture an…
Performance Efficiency AI Accelerator
The NeuroMosaic Processor (NMP) family is shattering the barriers to deploying ML by delivering a general-purpose architecture an…
Lowest Power and Cost End Point AI Accelerator
The NeuroMosaic Processor (NMP) family is shattering the barriers to deploying ML by delivering a general-purpose architecture an…
Highly scalable performance for classic and generative on-device and edge AI solutions
Scalable and Power-Efficient Neural Processing Units
The Neo NPUs offer energy-efficient hardware-based AI engines that can be pa…
Low-power, high-speed reconfigurable processor to accelerate AI everywhere.
Zhufeng-800: A low-power, high-speed reconfigurable processor to accelerate AI everywhere.
Even the smallest, lowest-power audio devices embed AI capabilities to enhance the user experience.
Designed to enable low power signal conditioning for IoT edge endpoints.
IP platform for intelligence gathering chips at the Edge
Designed to be the solution for an AI compute device right at the Edge, Sondrel’s new SFA 100 IP reference platform makes creatin…
Ceva-MotionEngine is Ceva’s core sensor processing software system and is the product of over 20 years of experience developing s…