Highly scalable inference NPU IP for next-gen AI applications
The inference neural processing unit (NPU) IP is suitable for high-performance edge devices including automotive, cameras, and mo…
4-/8-bit mixed-precision NPU IP
Features an optimized network model compiler that reduces DRAM traffic from intermediate activation data by grouped layer partitio…
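The DRAM-traffic claim above rests on a common compiler technique: when consecutive layers are fused into one group, their intermediate activations stay in on-chip SRAM instead of round-tripping through DRAM. A minimal sketch of that accounting, with made-up layer sizes and groupings (not vendor figures):

```python
def dram_traffic_bytes(activation_sizes, groups):
    """Total DRAM traffic for intermediate activations.

    activation_sizes[i] is the output size (bytes) of layer i.
    groups is a list of layer counts per fused group; only activations
    crossing a group boundary are spilled to DRAM (one write by the
    producer, one read by the consumer). The network's final output is
    written to DRAM once.
    """
    # Mark the index of the last layer in each group.
    boundaries = set()
    end = -1
    for count in groups:
        end += count
        boundaries.add(end)

    last = len(activation_sizes) - 1
    traffic = 0
    for i, size in enumerate(activation_sizes):
        if i == last:
            traffic += size        # final output: single write
        elif i in boundaries:
            traffic += 2 * size    # spill: write, then read back
        # else: activation stays in on-chip SRAM, no DRAM traffic
    return traffic

sizes = [4_000_000, 2_000_000, 2_000_000, 1_000_000]  # bytes per layer output
layer_by_layer = dram_traffic_bytes(sizes, [1, 1, 1, 1])  # no grouping
grouped = dram_traffic_bytes(sizes, [2, 2])               # layers fused pairwise
```

With these example sizes, pairwise grouping cuts intermediate-activation DRAM traffic from 17 MB to 5 MB; real compilers pick groupings subject to on-chip buffer capacity.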
FlexNoC 5 Option for Scalability and Performance-Critical Systems
Arteris IP FlexNoC Performance Option accelerates development of next-generation deep neural network (DNN) and machine learning s…
Low-power high-speed reconfigurable processor to accelerate AI everywhere.
Zhufeng-800: A low-power high-speed reconfigurable processor to accelerate AI everywhere.
Discover the AON1000™, our edge hardware and software AI IP, offering unmatched efficiency and accuracy for voice and sound recog…
Machine vision and deep learning are being embedded in integrated SoCs and expanding into high-volume applications such as automo…
Produced by DRAM manufacturers such as Samsung and Micron, High Bandwidth Memory, or HBM, provides users with high bandwidth, low …
Vivante AI-GPU augments the stunning 3D graphics rendering capabilities of the Vivante 3D engines with a dedicated Neural Network…
NPU IP for Data Center and Automotive
The VIP9400 processing family offers programmable, scalable and extendable solutions for markets that demand real time and AI dev…
NPU IP for AI Vision and AI Voice
The VIP9000 family offers programmable, scalable and extendable solutions for markets that demand real time and low power AI devi…
Enhanced Neural Processing Unit providing 98,304 MACs/cycle of performance for AI applications
The ARC® NPX Neural Processor IP family provides a high-performance, power- and area-efficient IP solution for a range of applica…
Enhanced Neural Processing Unit providing 8,192 MACs/cycle of performance for AI applications
The ARC® NPX Neural Processor IP family provides a high-performance, power- and area-efficient IP solution for a range of applica…
Enhanced Neural Processing Unit providing 65,536 MACs/cycle of performance for AI applications
The ARC® NPX Neural Processor IP family provides a high-performance, power- and area-efficient IP solution for a range of applica…
Enhanced Neural Processing Unit providing 49,152 MACs/cycle of performance for AI applications
The ARC® NPX Neural Processor IP family provides a high-performance, power- and area-efficient IP solution for a range of applica…
Enhanced Neural Processing Unit providing 4,096 MACs/cycle of performance for AI applications
The ARC® NPX Neural Processor IP family provides a high-performance, power- and area-efficient IP solution for a range of applica…
Enhanced Neural Processing Unit providing 32,768 MACs/cycle of performance for AI applications
The ARC® NPX Neural Processor IP family provides a high-performance, power- and area-efficient IP solution for a range of applica…
Enhanced Neural Processing Unit providing 24,576 MACs/cycle of performance for AI applications
The ARC® NPX Neural Processor IP family provides a high-performance, power- and area-efficient IP solution for a range of applica…
Enhanced Neural Processing Unit providing 16,384 MACs/cycle of performance for AI applications
The ARC® NPX Neural Processor IP family provides a high-performance, power- and area-efficient IP solution for a range of applica…
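The MACs/cycle figures quoted for the NPX configurations above convert to peak throughput once a clock frequency is fixed, since each MAC counts as two operations (a multiply and an add). A quick sketch of that arithmetic; the 1.0 GHz clock below is an illustrative assumption, not a vendor specification:

```python
def peak_tops(macs_per_cycle, clock_hz):
    """Theoretical peak throughput in TOPS: 2 ops per MAC per cycle."""
    return macs_per_cycle * 2 * clock_hz / 1e12

# Largest and smallest configurations listed above, at an assumed 1.0 GHz:
big = peak_tops(98_304, 1.0e9)    # ~196.6 TOPS
small = peak_tops(4_096, 1.0e9)   # ~8.2 TOPS
```

Achievable throughput is lower than this peak in practice, since it also depends on MAC utilization and memory bandwidth.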