AI Processor IP Cores
AI Processor IP cores provide high-performance processing power for AI algorithms, enabling real-time data analysis, pattern recognition, and decision-making. Supporting popular AI frameworks, AI Processor IP cores are ideal for applications in edge computing, autonomous vehicles, robotics, and smart devices.
94 AI Processor IP Cores from 39 vendors (showing 1-10)
Neural engine IP - Tiny and Mighty
- The Origin E1 NPUs are individually customized to the neural networks commonly deployed in edge devices, including home appliances, smartphones, and security cameras.
- For products like these, which require dedicated AI processing that minimizes power consumption, silicon area, and system cost, E1 cores offer the lowest power consumption and area in a 1 TOPS engine.
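Peak NPU throughput figures such as "1 TOPS" are conventionally derived by counting 2 operations (a multiply and an add) per MAC per clock cycle. A quick sketch of that arithmetic, where the MAC count and clock frequency are illustrative assumptions rather than vendor figures:

```python
# Illustrative arithmetic: peak throughput of a MAC-array NPU.
# MAC count and clock frequency below are assumptions, not vendor specs.
def peak_tops(num_macs: int, clock_hz: float) -> float:
    """Each MAC contributes 2 ops (multiply + add) per cycle."""
    return num_macs * 2 * clock_hz / 1e12

# Example: a 512-MAC array clocked at 1 GHz.
print(peak_tops(512, 1e9))  # prints 1.024
```

Real sustained throughput is lower than this peak, since it also depends on utilization, memory bandwidth, and the network being mapped.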
Fully-coherent RISC-V Tensor Unit
- The bulk of the computation in Large Language Models (LLMs) lies in fully-connected layers, which can be implemented efficiently as matrix multiplication.
- The Tensor Unit provides hardware tailored specifically to matrix-multiplication workloads, delivering a large performance boost for AI without a correspondingly large increase in power consumption.
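The claim above is easy to see in a few lines of NumPy: a fully-connected layer is just y = xWᵀ + b, so a hardware matrix-multiplication engine covers the bulk of transformer/LLM compute. The shapes here are illustrative, not tied to any particular core:

```python
import numpy as np

# A fully-connected (linear) layer expressed as a matrix multiplication.
# Shapes are illustrative; a Tensor Unit executes the x @ W.T step in hardware.
batch, d_in, d_out = 4, 8, 16
x = np.random.rand(batch, d_in).astype(np.float32)   # activations
W = np.random.rand(d_out, d_in).astype(np.float32)   # layer weights
b = np.zeros(d_out, dtype=np.float32)                # bias

y = x @ W.T + b   # the fully-connected layer: one matmul plus a bias add
```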
IP library for the acceleration of edge AI/ML
- A library with a wide selection of hardware IPs for designing modular and flexible SoCs that enable end-to-end inference on miniaturized systems.
- Available IP categories include ML accelerators, dedicated memory systems, the RISC-V-based 32-bit processor core icyflex-V, and peripherals.
Vision AI DSP
- Ceva-SensPro is a family of DSP cores architected to combine vision, radar, and AI processing in a single architecture.
- The silicon-proven cores provide scalable performance across a wide range of applications that combine vision processing, radar/LiDAR processing, and AI inferencing to interpret their surroundings, including automotive, robotics, surveillance, AR/VR, mobile devices, and smart homes.
High-Performance Memory Expansion IP for AI Accelerators
- Expands effective HBM capacity by up to 50%
- Enhances AI accelerator throughput
- Boosts effective HBM bandwidth
- Integrated address translation and memory management
Compact neural network engine offering scalable performance (32, 64, or 128 MACs) at a very low energy footprint
- Best-in-class energy efficiency
- Enables compelling use cases and advanced concurrency
- Scalable IP for various workloads
Tensilica AI Max - NNA 110 Single Core
- Scalable design to adapt to various AI workloads
- Efficient at mapping state-of-the-art DL/AI workloads
- End-to-end software toolchain for all markets and a large number of frameworks
NPU IP for Data Center and Automotive
- 128-bit vector processing unit (shader + ext)
- OpenCL 1.2 shader instruction set
- Enhanced vision instruction set (EVIS)
- INT 8/16/32b, Float 16/32b in PPU
- Convolution layers
NPU IP for Wearable and IoT Market
- ML inference engine for deeply embedded systems
- NN engine supports popular ML frameworks
- Supports a wide range of NN algorithms, with flexible layer ordering
NPU IP for AI Vision and AI Voice
- 128-bit vector processing unit (shader + ext)
- OpenCL 3.0 shader instruction set
- Enhanced vision instruction set (EVIS)
- INT 8/16/32b, Float 16/32b
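Several entries above list INT8 data paths alongside floating point. The reason INT8 inference works is quantization: weights and activations are mapped to 8-bit integers with a per-tensor scale. A minimal sketch of symmetric max-abs quantization follows; this is the simplest scale rule, not any specific vendor's toolchain, which would typically calibrate the scale instead:

```python
import numpy as np

# Minimal sketch of symmetric INT8 quantization (max-abs scale rule).
# Real NPU toolchains calibrate scales per layer; this is illustrative only.
def quantize_int8(x: np.ndarray):
    scale = np.abs(x).max() / 127.0           # map the largest value to +/-127
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

x = np.array([0.5, -1.0, 0.25], dtype=np.float32)
q, s = quantize_int8(x)
x_hat = dequantize(q, s)   # recovers x to within one quantization step
```

The round trip loses at most one quantization step (the scale) per element, which is why 8-bit integer math is usually accurate enough for inference while being far cheaper in silicon than floating point.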