400G UDP/IP Hardware Protocol Stack
Implements a UDP/IP hardware protocol stack that enables high-speed communication over a LAN or a point-to-point connection.
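For context, the datagram format such a stack parses and generates in hardware is fixed by RFC 768: an 8-byte header followed by the payload. A minimal host-side sketch in Python (illustrative only; the function name is hypothetical, not part of this IP's interface):

```python
import struct

def build_udp_header(src_port: int, dst_port: int, payload: bytes,
                     checksum: int = 0) -> bytes:
    """Pack an 8-byte UDP header (RFC 768) in network byte order.

    The length field covers header plus payload. A checksum of 0
    means "not computed" over IPv4, a value hardware offload engines
    typically fill in themselves.
    """
    length = 8 + len(payload)
    return struct.pack("!HHHH", src_port, dst_port, length, checksum) + payload

datagram = build_udp_header(12345, 53, b"hello")
```

A hardware stack performs the same framing (and the inverse parsing) at line rate, per clock cycle, rather than in software.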
High-Performance Memory Expansion IP for AI Accelerators
AI inference performance is increasingly constrained by memory bandwidth and capacity, not compute.
AI Accelerator Specifically for CNN
Our IP inference accelerators enhance AI computations, providing outstanding performance across various applications.
Tensilica AI Max - NNA 110 Single Core
Single-core neural network accelerator offering from 0.5 to 4 TOPS. Optimized for machine learning inference applications. The Cade…
Multi-core capable 64-bit RISC-V CPU with vector extensions
The SiFive® Intelligence™ X180 core IP products are designed to meet the increasing requirements of embedded IoT and AI at the fa…
Multi-core capable 32-bit RISC-V CPU with vector extensions
The SiFive® Intelligence™ X160 core IP products are designed to meet the increasing requirements of embedded IoT and AI at the fa…
These eFPGA IP cores offer designers the flexibility to tailor resources to their application requirements, available as either S…
RISC-V AI Acceleration Platform - Scalable, standards-aligned soft chiplet IP
Unlocking AI with Open Compute
Traditional AI processors force customers into closed hardware-software ecosystems that limit inno…
Safety Enhanced GPNPU Processor IP
Automotive applications are uniquely demanding for any AI acceleration solution.
High performance-efficient deep learning accelerator for edge and end-point inference
AndesAIRE™ AnDLA™ I350 is a deep learning accelerator (DLA) designed to enable high performance-efficient and cost-sensitive AI s…
Ceva-NeuPro Studio is a comprehensive software development environment designed to streamline the development and deployment of A…
Ceva-SensPro is a family of DSP cores architected to combine vision, Radar, and AI processing in a single architecture.
Neural engine IP - AI Inference for the Highest Performing Systems
From data centers to autonomous cars, the most demanding AI applications need high-performance NPUs with the lowest possible late…
Neural engine IP - Balanced Performance for AI Inference
On-device AI is a must-have for many new designs.
The Ceva-NeuPro-Nano is an efficient and self-sufficient Edge NPU designed for Embedded ML applications.
Neural engine IP - The Cutting Edge in On-Device AI
With support for the latest generative AI models and traditional RNN, CNN, and LSTM models, the Origin™ E6 NPUs scale from 16 to …
Run-time Reconfigurable Neural Network IP
The Dynamic Neural Accelerator II (DNA-II) is an efficient neural network IP core that can be paired with any host processor.
Edge-friendly LLM and CNN AI Inference processing
Edge devices are increasingly equipped with AI processing capabilities that enh…
Mobile-Centric LLM and CNN AI Inference processing
Consumers are excited about the latest AI features in smartphones.
High Performance Scalability across Complex Models
Cloud-based AI inference is the backbone of retail, e-commerce, healthcare, in…