Accelerate Edge AI Innovation
AI data-processing workloads at the edge are already transforming use cases and user experiences.
RISC-V-Based, Open Source AI Accelerator for the Edge
Coral NPU is a machine learning (ML) accelerator core designed for energy-efficient AI at the edge.
The Ceva-NeuPro-Nano is an efficient and self-sufficient edge NPU designed for embedded ML applications.
Highly scalable inference NPU IP for next-gen AI applications
The inference neural processing unit (NPU) IP is suitable for high-performance edge devices including automotive, cameras, and mo…
Scalable Edge NPU IP for Generative AI
Ceva-NeuPro-M is a scalable NPU architecture, ideal for transformers, Vision Transformers (ViT), and generative AI applications, …
4-/8-bit mixed-precision NPU IP
Features an optimized network model compiler that reduces DRAM traffic from intermediate activation data by grouped layer partitio…
Edge-friendly LLM and CNN AI Inference Processing
Edge devices are increasingly equipped with AI processing capabilities that enh…
Mobile-Centric LLM and CNN AI Inference Processing
Consumers are excited about the latest AI features in smartphones.
High-Performance Scalability across Complex Models
Cloud-based AI inference is the backbone of retail, e-commerce, healthcare, in…
Whether deployed in-cabin for driver distraction monitoring or in the advanced driver assistance system (ADAS) stack for object recognition and point…
Optimized Neural Processing for Next-Generation Machine Learning with High-Efficiency and Scalable AI Compute Characteristics
Sca…
WAVE-N is a high-performance, video-specialized NPU IP designed to deliver real-time, deep learning-based image enhancement for e…
The ARC® NPX Neural Processor IP family provides a high-performance, power- and area-efficient IP solution for a range of applica…
The new Synopsys ARC® NPX Neural Processing Unit (NPU) IP family delivers the industry’s highest performance and support for the …
The ZIA™ A3000 AI processor IP is a low-power processor specifically designed for edge-side neural network inference processing.
Neural network processor designed for edge devices
Kneron NPU IP Series are neural network processors that have been designed for edge devices.
NPU IP for Data Center and Automotive
The VIP9400 processing family offers programmable, scalable, and extendable solutions for markets that demand real-time and AI dev…
NPU IP for Wearable and IoT Market
The VIP9000Pico family offers low-power, programmable, scalable, and extendable solutions for markets that demand low-power AI dev…
NPU IP for AI Vision and AI Voice
The VIP9000 family offers programmable, scalable, and extendable solutions for markets that demand real-time and low-power AI devi…
Ceva-NeuPro Studio is a comprehensive software development environment designed to streamline the development and deployment of A…