AI Accelerator Specifically for CNN
Our inference accelerator IP speeds up AI computation, delivering outstanding performance across a wide range of applications.
Convolutional Neural Network (CNN) Compact Accelerator
Take advantage of the parallel processing power of FPGAs to implement CNNs.
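As a generic illustration (not DeepMentor's implementation), the sketch below shows why convolution maps so well onto FPGA fabric: every output pixel is an independent multiply-accumulate over a small window, so the loop iterations can be unrolled into parallel hardware MAC units.

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 2-D convolution (valid padding). Each output pixel is an
    independent multiply-accumulate, which is why an FPGA can compute
    many of them in parallel."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):          # on an FPGA, these two loops unroll
        for j in range(ow):      # into an array of parallel MAC units
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Example: a 3x3 Laplacian kernel over a 5x5 image yields a 3x3 output
img = np.arange(25, dtype=float).reshape(5, 5)
k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
print(conv2d(img, k).shape)  # (3, 3)
```

Because no output pixel depends on another, the same computation scales from one MAC unit to thousands, which is the parallelism these accelerators exploit.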
DeepMentor has developed an AI IP that combines low power and high performance with a RISC-V SoC.
Edge-Friendly LLM and CNN AI Inference Processing
Edge devices are increasingly equipped with AI processing capabilities that enh…
Mobile-Centric LLM and CNN AI Inference Processing
Consumers are excited about the latest AI features in smartphones.
High-Performance Scalability Across Complex Models
Cloud-based AI inference is the backbone of retail, e-commerce, healthcare, in…
Whether deployed in-cabin for driver distraction monitoring or in the advanced driver assistance system (ADAS) stack for object recognition and point…
Highly scalable performance for classic and generative on-device and edge AI solutions
Scalable and Power-Efficient Neural Processing Units
The Neo NPUs offer energy-efficient hardware-based AI engines that can be pa…
Designed to enable low power signal conditioning for IoT edge endpoints.
Verification IP for CSI/DSI/C-PHY/D-PHY
A comprehensive VIP solution for CSI-2, DSI-2, D-PHY and C-PHY transmitter and receiver designs.
High-efficiency deep learning accelerator for edge and endpoint inference
AndesAIRE™ AnDLA™ I350 is a deep learning accelerator (DLA) designed to enable high-efficiency, cost-sensitive AI s…
IP cores for ultra-low power AI-enabled devices
Each nearbAI core is an ultra-low power neural processing unit (NPU) and comes with an optimizer / neural network compiler.
A new class of machine learning (ML) processor, called a microNPU, specifically designed to accelerate ML inference in area-const…
Neural network processor designed for edge devices
The Kneron NPU IP Series comprises neural network processors designed for edge devices.
Accelerator for Convolutional Neural Networks
Gyrfalcon Technologies (GTI) offers silicon-proven acceleration IP for Convolutional Neural Networks used in image classification…
Neural engine IP - Tiny and Mighty
Small, low-power dedicated AI engines are essential for home appliances, security cameras, and always-on smartphone features.
Neural engine IP - AI Inference for the Highest Performing Systems
From data centers to autonomous cars, the most demanding AI applications need high-performance NPUs with the lowest possible late…
Neural engine IP - The Cutting Edge in On-Device AI
With support for the latest generative AI models and traditional RNN, CNN, and LSTM models, the Origin™ E6 NPUs scale from 16 to …
Neural engine IP - Balanced Performance for AI Inference
On-device AI is a must-have for many new designs.
eFPGA IP — Flexible Reconfigurable Logic Acceleration Core
RapidFlex eFPGA IP provides a reconfigurable, upgradeable, and iterative logic computing layer for SoCs, MCUs, AI accelerators, i…