Convolutional Neural Network (CNN) Compact Accelerator
Leverage the parallel processing power of FPGAs to implement CNNs.
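For context, the operation these accelerators parallelize is the convolution itself: every output pixel is an independent multiply-accumulate over a small window, so an FPGA or NPU can compute many of them concurrently. A minimal NumPy sketch (illustrative only, not any vendor's implementation):

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 2D convolution with 'valid' padding.

    Each output element is an independent sum of elementwise
    products over one window -- the independence that FPGA/NPU
    accelerators exploit to run many MACs in parallel.
    """
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):          # in hardware, these loops unroll
        for j in range(ow):      # into parallel MAC units
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
box = np.ones((2, 2))
print(conv2d(img, box))  # 3x3 map of 2x2 window sums
```

On an FPGA, the two inner loops are typically unrolled into an array of multiply-accumulate units fed by line buffers, trading logic area for throughput.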
AI Accelerator Built Specifically for CNNs
Our inference accelerator IP speeds up AI computation, delivering outstanding performance across a wide range of applications.
Accelerator for Convolutional Neural Networks
Gyrfalcon Technologies (GTI) offers silicon-proven acceleration IP for Convolutional Neural Networks used in image classification…
Designed to enable low power signal conditioning for IoT edge endpoints.
Edge-Friendly LLM and CNN AI Inference Processing
Edge devices are increasingly equipped with AI processing capabilities that enh…
Mobile-Centric LLM and CNN AI Inference Processing
Consumers are excited about the latest AI features in smartphones.
High-Performance Scalability Across Complex Models
Cloud-based AI inference is the backbone of retail, e-commerce, healthcare, in…
Whether deployed in-cabin for driver distraction detection or in the advanced driver assistance system (ADAS) stack for object recognition and point…
High-performance, efficient deep learning accelerator for edge and end-point inference
AndesAIRE™ AnDLA™ I350 is a deep learning accelerator (DLA) designed to enable high-performance, cost-sensitive AI s…
Neural engine IP - Tiny and Mighty
Small, low-power dedicated AI engines are essential for home appliances, security cameras, and always-on smartphone features.
Neural engine IP - AI Inference for the Highest Performing Systems
From data centers to autonomous cars, the most demanding AI applications need high-performance NPUs with the lowest possible late…
Neural engine IP - The Cutting Edge in On-Device AI
With support for the latest generative AI models and traditional RNN, CNN, and LSTM models, the Origin™ E6 NPUs scale from 16 to …
Neural engine IP - Balanced Performance for AI Inference
On-device AI is a must-have for many new designs.
eFPGA IP — Flexible Reconfigurable Logic Acceleration Core
RapidFlex eFPGA IP provides a reconfigurable, upgradeable, and iterative logic computing layer for SoCs, MCUs, AI accelerators, i…