AI DSA Processor - 9-Stage Pipeline, Dual-issue
NI900 is a DSA processor based on the 900 Series.
- Edge AI Accelerator
The Ceva-NeuPro-Nano is an efficient and self-sufficient Edge NPU designed for embedded ML applications.
RISC-V-Based, Open Source AI Accelerator for the Edge
Coral NPU is a machine learning (ML) accelerator core designed for energy-efficient AI at the edge.
Hierarchical scalability is the foundational principle of the Fibonacci machine-learning (ML) system-on-chip (SoC).
Safety Enhanced GPNPU Processor IP
Automotive applications are uniquely demanding for any AI acceleration solution.
Neural engine IP - AI Inference for the Highest Performing Systems
From data centers to autonomous cars, the most demanding AI applications need high-performance NPUs with the lowest possible late…
Neural engine IP - The Cutting Edge in On-Device AI
With support for the latest generative AI models and traditional RNN, CNN, and LSTM models, the Origin™ E6 NPUs scale from 16 to …
Neural engine IP - Balanced Performance for AI Inference
On-device AI is a must-have for many new designs.
Neural engine IP - Tiny and Mighty
Small, low-power dedicated AI engines are essential for home appliances, security cameras, and always-on smartphone features.
Ceva-NeuPro Studio is a comprehensive software development environment designed to streamline the development and deployment of A…
SFA 300 is designed to implement scalable processing applications supporting local networking.
Edge-friendly LLM and CNN AI Inference Processing
Edge devices are increasingly equipped with AI processing capabilities that enh…
Mobile-Centric LLM and CNN AI Inference Processing
Consumers are excited about the latest AI features in smartphones.
High Performance Scalability across Complex Models
Cloud-based AI inference is the backbone of retail, e-commerce, healthcare, in…
Whether deployed in-cabin for driver distraction monitoring or in the advanced driver assistance system (ADAS) stack for object recognition and point…
The ARC® NPX Neural Processor IP family provides a high-performance, power- and area-efficient IP solution for a range of applica…
The GenAI IP is the smallest version of our NPU, tailored to small devices such as FPGAs and Adaptive SoCs, where the maximum Fre…
Enhanced Neural Processing Unit providing 98,304 MACs/cycle of performance for AI applications
Enhanced Neural Processing Unit providing 8,192 MACs/cycle of performance for AI applications
Enhanced Neural Processing Unit providing 65,536 MACs/cycle of performance for AI applications
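The MACs/cycle figures quoted above translate to peak throughput only once a clock frequency is fixed. As a minimal sketch, assuming each MAC counts as two operations (multiply + accumulate) and using a hypothetical 1.0 GHz clock (the actual achievable frequency depends on the process node and configuration), the conversion to peak TOPS is:

```python
def peak_tops(macs_per_cycle: int, clock_ghz: float) -> float:
    """Peak throughput in TOPS, counting each MAC as two operations."""
    ops_per_second = macs_per_cycle * 2 * clock_ghz * 1e9
    return ops_per_second / 1e12  # tera-operations per second

# Assumed 1.0 GHz clock; the vendor's real frequency may differ.
print(round(peak_tops(8_192, 1.0), 1))    # 16.4 peak TOPS
print(round(peak_tops(65_536, 1.0), 1))   # 131.1 peak TOPS
print(round(peak_tops(98_304, 1.0), 1))   # 196.6 peak TOPS
```

Note this is a theoretical ceiling; sustained throughput depends on utilization, memory bandwidth, and the model being run.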