Edge AI Accelerator
AI inference engine for real-time edge intelligence
Today’s robots specialize in narrowly defined tasks, built for simple automation of repetitive behavior.
Even the smallest, lowest-power audio devices embed AI capabilities to enhance the user experience.
Neural engine IP - AI Inference for the Highest Performing Systems
From data centers to autonomous cars, the most demanding AI applications need high-performance NPUs with the lowest possible latency.
Neural engine IP - Balanced Performance for AI Inference
On-device AI is a must-have for many new designs.
Configurable AI inference processor IP that can be tuned for performance and size, and can process data such as images, videos, …
Ceva-NeuPro Studio is a comprehensive software development environment designed to streamline the development and deployment of A…
The Ceva-NeuPro-Nano is an efficient and self-sufficient Edge NPU designed for Embedded ML applications.
High-Performance Memory Expansion IP for AI Accelerators
AI inference performance is increasingly constrained by memory bandwidth and capacity - not compute.
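For intuition on why bandwidth rather than compute often sets the ceiling, here is a rough back-of-the-envelope sketch; the parameter count, precision, bandwidth, and TOPS figures are illustrative assumptions, not figures from any vendor datasheet.

```c
#include <stdio.h>

/* Illustrative roofline-style estimate: for decoder-style LLM inference,
 * each generated token must stream (roughly) all model weights from memory,
 * so token throughput is capped by bandwidth regardless of available TOPS.
 * All numbers below are assumed for illustration only. */
int main(void) {
    double params       = 7e9;      /* assumed 7B-parameter model        */
    double bytes_per_w  = 1.0;      /* assumed INT8 weights              */
    double bandwidth    = 100e9;    /* assumed 100 GB/s memory bandwidth */
    double compute_tops = 40e12;    /* assumed 40 TOPS of INT8 compute   */

    double weight_bytes  = params * bytes_per_w;
    double ops_per_token = 2.0 * params;   /* ~2 ops (mul + add) per weight */

    double bw_limit_tok_s      = bandwidth / weight_bytes;
    double compute_limit_tok_s = compute_tops / ops_per_token;

    printf("bandwidth-limited: %.1f tokens/s\n", bw_limit_tok_s);
    printf("compute-limited:   %.1f tokens/s\n", compute_limit_tok_s);
    /* With these assumptions the bandwidth bound (~14 tokens/s) sits far
     * below the compute bound (~2857 tokens/s): the NPU starves on memory,
     * which is why memory expansion matters more than extra compute. */
    return 0;
}
```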
Ceva-SensPro is a family of DSP cores architected to combine vision, radar, and AI processing in a single architecture.
Run-time Reconfigurable Neural Network IP
The Dynamic Neural Accelerator II (DNA-II) is an efficient, run-time reconfigurable neural network IP core that can be paired with any host processor.
Zhufeng-800: A low-power high-speed reconfigurable processor to accelerate AI everywhere.
Multi-core capable 64-bit RISC-V CPU with vector extensions
The SiFive® Intelligence™ X180 core IP products are designed to meet the increasing requirements of embedded IoT and AI at the fa…
Multi-core capable 32-bit RISC-V CPU with vector extensions
The SiFive® Intelligence™ X160 core IP products are designed to meet the increasing requirements of embedded IoT and AI at the fa…
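As a small illustration of what vector extensions provide for these embedded AI workloads, the sketch below uses the standard RISC-V Vector (RVV) C intrinsics to write a vector-length-agnostic multiply-accumulate loop. This is generic RVV 1.0 intrinsics code under assumed toolchain support for the `__riscv_`-prefixed intrinsics naming, not a SiFive-specific API.

```c
#include <riscv_vector.h>
#include <stddef.h>

/* Vector-length-agnostic y[i] += a * x[i], a building block of dot products
 * and convolutions. vsetvl picks the hardware's vector length at run time,
 * so the same code runs unchanged on narrow or wide RVV implementations. */
void axpy_rvv(size_t n, float a, const float *x, float *y) {
    for (size_t i = 0; i < n;) {
        size_t vl = __riscv_vsetvl_e32m1(n - i);             /* elements this pass */
        vfloat32m1_t vx = __riscv_vle32_v_f32m1(x + i, vl);  /* load x slice       */
        vfloat32m1_t vy = __riscv_vle32_v_f32m1(y + i, vl);  /* load y slice       */
        vy = __riscv_vfmacc_vf_f32m1(vy, a, vx, vl);         /* vy += a * vx       */
        __riscv_vse32_v_f32m1(y + i, vy, vl);                /* store result       */
        i += vl;
    }
}
```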
Edge-friendly LLM and CNN AI Inference processing
Edge devices are increasingly equipped with AI processing capabilities that enh…
Mobile-Centric LLM and CNN AI Inference processing
Consumers are excited about the latest AI features in smartphones.
High Performance Scalability across Complex Models
Cloud-based AI inference is the backbone of retail, e-commerce, healthcare, in…
TPU IoT/Edge Licensable Hardware IP
The Prodigy is the first Universal Processor combining General Purpose Processors, High Performance Computing (HPC), Artificial Intelligence…
These eFPGA IP cores offer designers the flexibility to tailor resources to their application requirements, available as either S…
400G UDP/IP Hardware Protocol Stack
Implements a UDP/IP hardware protocol stack that enables high-speed communication over a LAN or a point-to-point connection.
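To make concrete what such a stack handles, the sketch below shows the equivalent software path: a plain POSIX UDP sender. A hardware protocol stack performs the same header construction and checksumming in logic at line rate; this socket code is only a software analogy under assumed addresses and ports, not the IP core's interface.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

/* Software analogy of the UDP/IP datapath: the kernel builds the UDP and IP
 * headers and computes checksums; a hardware UDP/IP stack does the same work
 * in gateware so an FPGA/ASIC can stream datagrams at line rate. */
int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in dst = {0};
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(5000);                        /* assumed destination port */
    inet_pton(AF_INET, "192.168.1.10", &dst.sin_addr);   /* assumed peer address     */

    const char payload[] = "hello over UDP";
    if (sendto(fd, payload, sizeof payload, 0,
               (struct sockaddr *)&dst, sizeof dst) < 0)
        perror("sendto");

    close(fd);
    return 0;
}
```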
Future-proof IP for training and inference with leading performance per watt and per dollar
Tenstorrent develops AI IP with precision, anchored in RISC-V’s open architecture, delivering specialized, silicon-proven solutions…