Ultra low power inference engine
Innatera's ultra-efficient neuromorphic processors mimic the brain's mechanisms for processing sensory data.
- Edge AI Accelerator
- Silicon proven
Neuromorphic Processor IP (Second Generation)
Akida is a neural processor platform inspired by the cognitive ability and efficiency of the human brain.
The ARC® NPX Neural Processor IP family provides a high-performance, power- and area-efficient IP solution for a range of applica…
The new Synopsys ARC® NPX Neural Processing Unit (NPU) IP family delivers the industry’s highest performance and support for the …
Enhanced Neural Processing Unit configurations for AI applications, by MACs/cycle of performance:
- 1,024
- 4,096
- 8,192
- 12,288
- 16,384
- 24,576
- 32,768
- 49,152
- 65,536
- 98,304
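The MACs/cycle figures quoted for these NPU configurations can be related to a peak-throughput number. A minimal sketch, assuming the common convention of 2 operations per MAC (one multiply plus one accumulate) and a hypothetical clock frequency; neither the frequency nor the ops-per-MAC convention comes from this listing:

```python
# Hedged sketch: convert a MACs/cycle figure into peak TOPS.
# Assumptions (not from the product listing above):
#   - 2 ops per MAC (multiply + accumulate)
#   - an illustrative 1.3 GHz clock in the example below

def peak_tops(macs_per_cycle: int, clock_hz: float, ops_per_mac: int = 2) -> float:
    """Peak tera-operations per second for a given MAC array width and clock."""
    return macs_per_cycle * ops_per_mac * clock_hz / 1e12

# Example: the largest configuration listed above at an assumed 1.3 GHz clock.
print(f"{peak_tops(98_304, 1.3e9):.1f} TOPS")  # → 255.6 TOPS
```

This is the usual back-of-envelope calculation vendors use when quoting TOPS; actual sustained throughput depends on utilization, precision, and memory bandwidth.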
GPNPU Processor IP
Designed from the ground up to address significant machine learning (ML) inference deployment challenges facing system on chip (S…
Available configurations:
- 1 to 7 TOPs
- 4 to 28 TOPs
- 16 to 108 TOPs
- 32 to 864 TOPs
Ultra High-Speed Cache Memory Compiler - 2-Port Register File - TSMC N3P
The Ultra High-Speed cache memory is an adaptable, independent, non-coherent cache Intellectual Property (IP) featuring a cache …