Whether deployed in-cabin for driver distraction or in the driver assistance system (ADAS) stack for object recognition and point…
NPU IP for Data Center and Automotive
The VIP9400 processing family offers programmable, scalable, and extendable solutions for markets that demand real-time and AI dev…
The ASIL B or D Ready ARC NPX6FS NPUs enable automotive system-on-chip (SoC) designers to accelerate ISO 26262 certification of D…
Highly scalable inference NPU IP for next-gen AI applications
The inference neural processing unit (NPU) IP is suitable for high-performance edge devices including automotive, cameras, and mo…
Low Power Ultra-wideband (UWB) IP
The Ceva-Waves UWB platform cuts the development time and risk for implementing a wide range of UWB functionality in SoCs.
RISC-V-Based, Open Source AI Accelerator for the Edge
Coral NPU is a machine learning (ML) accelerator core designed for energy-efficient AI at the edge.
Highly scalable performance for classic and generative on-device and edge AI solutions
Scalable and Power-Efficient Neural Processing Units
The Neo NPUs offer energy-efficient hardware-based AI engines that can be pa…
GPNPU Processor IP - 32 to 864 TOPs
Designed from the ground up to address significant machine learning (ML) inference deployment challenges facing system on chip (S…
GPNPU Processor IP - 16 to 108 TOPs
Designed from the ground up to address significant machine learning (ML) inference deployment challenges facing system on chip (S…
GPNPU Processor IP - 4 to 28 TOPs
Designed from the ground up to address significant machine learning (ML) inference deployment challenges facing system on chip (S…
GPNPU Processor IP - 1 to 7 TOPs
Designed from the ground up to address significant machine learning (ML) inference deployment challenges facing system on chip (S…
The new Synopsys ARC® NPX Neural Processing Unit (NPU) IP family delivers the industry’s highest performance and support for the …