AI IP
658 IP from 117 vendors
- NPU IP for Embedded AI
  - Fully programmable to efficiently execute neural networks, feature extraction, signal processing, audio, and control code
  - Scalable performance by design to meet a wide range of use cases, with configurations of up to 64 int8 MACs per cycle (natively 128 4x8 MACs)
  - Future-proof architecture that supports the most advanced ML data types and operators
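A MACs-per-cycle spec like the one above maps to peak throughput via the usual convention that one MAC counts as two operations (multiply + accumulate). A minimal sketch, assuming a 1 GHz clock (the clock is not given in the listing):

```python
# Rough peak-throughput estimate from a MACs-per-cycle spec.
# Assumption (not from the listing): a 1 GHz clock.
# Convention: 1 MAC = 2 ops (one multiply + one accumulate).

def peak_gops(macs_per_cycle: int, clock_hz: float, ops_per_mac: int = 2) -> float:
    """Peak throughput in GOPS = MACs/cycle * ops/MAC * clock / 1e9."""
    return macs_per_cycle * ops_per_mac * clock_hz / 1e9

# 64 int8 MACs per cycle at the assumed 1 GHz clock:
print(peak_gops(64, 1e9))   # 128.0 GOPS
# The native 128 4x8 MACs per cycle at the same assumed clock:
print(peak_gops(128, 1e9))  # 256.0 GOPS
```

Peak numbers like these are upper bounds; sustained throughput depends on memory bandwidth and operator coverage.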
- NPU IP family for generative and classic AI with high power efficiency; scalable and future-proof
  - Supports a wide range of activation and weight data types, from 32-bit floating point down to 2-bit binary neural networks (BNN)
- Complete Neural Processor for Edge AI
  - Designed for low-power neural network processing
  - Flexible training methods
  - Scalable neuron fabric
- AI accelerator (NPU) IP - 1 to 20 TOPS
  - Power-efficient: 18 TOPS/W
  - Scalable performance from 2-9K MACs
  - Capable of processing HD images on chip
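An efficiency figure like 18 TOPS/W translates directly into a power estimate for a given sustained workload. A back-of-the-envelope sketch (the 18 TOPS/W and 20 TOPS figures are from the listing; sustaining the full peak rate is an assumption for illustration):

```python
# Back-of-the-envelope power estimate from a TOPS/W efficiency figure.
# power (W) = sustained throughput (TOPS) / efficiency (TOPS/W)

def power_watts(tops: float, tops_per_watt: float) -> float:
    """Power drawn when sustaining `tops` at `tops_per_watt` efficiency."""
    return tops / tops_per_watt

# Running the top-end 20 TOPS configuration at 18 TOPS/W:
print(round(power_watts(20, 18), 3))  # ~1.111 W
# A 9 TOPS workload at the same efficiency:
print(power_watts(9, 18))             # 0.5 W
```

Vendor efficiency numbers are typically best-case; real workloads draw more per TOPS when utilization drops.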
- RISC-V GPGPU for 3D graphics and AI at the edge
  - GPGPU: 3D, vector & 2.5D graphics, AI
  - ISA: RV64IMFC + custom GFX & AI extensions
  - Vertex/shader processing: unified, fully programmable RISC-V with an LLVM C/C++ toolchain
- Highly scalable inference NPU IP for next-gen AI applications
  - Matrix multiplication: 4096 MACs/cycle (int8), 1024 MACs/cycle (int16)
  - Vector processor: RISC-V with RVV 1.0
  - Custom instructions for softmax and local-storage access
- Compact neural network engine offering scalable performance (32, 64, or 128 MACs) at a very low energy footprint
  - Best-in-class energy efficiency
  - Enables compelling use cases and advanced concurrency
  - Scalable IP for various workloads
- Highly scalable performance for classic and generative on-device and edge AI solutions
  - Flexible system integration
  - Scalable design and configurability
  - Efficient in mapping state-of-the-art AI/ML workloads
- Tensilica AI Max - NNA 110 Single Core
  - Scalable design to adapt to various AI workloads
  - Efficient in mapping state-of-the-art DL/AI workloads
  - End-to-end software toolchain for all markets and a large number of frameworks
- AI Accelerator (NPU) IP - 3.2 GOPS for Audio Applications
  - 3.2 GOPS
  - Ultra-low power consumption (<300 µW)
  - Low latency