AI Processor IP Cores

AI Processor IP cores provide high-performance processing power for AI algorithms, enabling real-time data analysis, pattern recognition, and decision-making. Supporting popular AI frameworks, AI Processor IP cores are ideal for applications in edge computing, autonomous vehicles, robotics, and smart devices.

All offers in AI Processor IP Cores

Compare 95 AI Processor IP Cores from 40 vendors (1 - 10)
  • LLM Accelerator IP for Multimodal, Agentic Intelligence
    • HyperThought is a cutting-edge LLM accelerator IP designed to revolutionize AI applications.
    • Built for the demands of multimodal and agentic intelligence, HyperThought delivers unparalleled performance, efficiency, and security.
    Block Diagram -- LLM Accelerator IP for Multimodal, Agentic Intelligence
  • AI IP Core
    • The low-power, high-performance AI IP developed by DeepMentor integrates a RISC-V SoC. Customers can quickly integrate a unique combination of silicon intellectual property into an AI SoC chip.
    • System manufacturers do not need to worry about AI software integration and system development, and can immediately bring unique AI products to market.
    Block Diagram -- AI IP Core
  • High-Performance Memory Expansion IP for AI Accelerators
    • Expand Effective HBM Capacity by up to 50%
    • Enhance AI Accelerator Throughput
    • Boost Effective HBM Bandwidth
    • Integrated address translation and memory management
    Block Diagram -- High-Performance Memory Expansion IP for AI Accelerators
  • Neural engine IP - Tiny and Mighty
    • The Origin E1 NPUs are individually customized to various neural networks commonly deployed in edge devices, including home appliances, smartphones, and security cameras.
    • For products like these that require dedicated AI processing that minimizes power consumption, silicon area, and system cost, E1 cores offer the lowest power consumption and area in a 1 TOPS engine.
    Block Diagram -- Neural engine IP - Tiny and Mighty
  • Fully-coherent RISC-V Tensor Unit
    • The bulk of computations in Large Language Models (LLMs) is in fully-connected layers that can be efficiently implemented as matrix multiplication.
    • The Tensor Unit provides hardware specifically tailored to matrix multiplication workloads, delivering a large performance boost for AI without a significant increase in power consumption.
    Block Diagram -- Fully-coherent RISC-V Tensor Unit
  • IP library for the acceleration of edge AI/ML
    • A library with a wide selection of hardware IPs for the design of modular and flexible SoCs that enable end-to-end inference on miniaturized systems.
    • Available IP categories include ML accelerators, dedicated memory systems, the RISC-V based 32-bit processor core icyflex-V, and peripherals.
    Block Diagram -- IP library for the acceleration of edge AI/ML
  • Vision AI DSP
    • Ceva-SensPro is a family of DSP cores architected to combine vision, radar, and AI processing in a single architecture.
    • The silicon-proven cores provide scalable performance to cover a wide range of applications that combine vision processing, radar/LiDAR processing, and AI inferencing to interpret their surroundings. These include automotive, robotics, surveillance, AR/VR, mobile devices, and smart homes.
    Block Diagram -- Vision AI DSP
  • Compact neural network engine offering scalable performance (32, 64, or 128 MACs) at very low energy footprints
    • Best-in-Class Energy
    • Enables Compelling Use Cases and Advanced Concurrency
    • Scalable IP for Various Workloads
    Block Diagram -- Compact neural network engine offering scalable performance (32, 64, or 128 MACs) at very low energy footprints
  • Tensilica AI Max - NNA 110 Single Core
    • Scalable Design to Adapt to Various AI Workloads
    • Efficient in Mapping State-of-the-Art DL/AI Workloads
    • End-to-End Software Toolchain for All Markets and Large Number of Frameworks
    Block Diagram -- Tensilica AI Max - NNA 110 Single Core
  • NPU IP for Data Center and Automotive
    • 128-bit vector processing unit (shader + ext)
    • OpenCL 1.2 shader instruction set
    • Enhanced vision instruction set (EVIS)
    • INT 8/16/32b, Float 16/32b in PPU
    • Convolution layers
    Block Diagram -- NPU IP for Data Center and Automotive
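The Tensor Unit entry above notes that the bulk of LLM compute sits in fully-connected layers that reduce to matrix multiplication, which is what dedicated tensor hardware accelerates. A minimal NumPy sketch of that reduction (the layer dimensions below are illustrative assumptions, not taken from any vendor's specification):

```python
import numpy as np

# A fully-connected (linear) layer computes y = x @ W + b.
# Because this is a single matrix multiplication, LLM inference time is
# dominated by matmul throughput -- the operation tensor units target.
batch, d_in, d_out = 4, 512, 2048        # hypothetical layer sizes

rng = np.random.default_rng(0)
x = rng.standard_normal((batch, d_in))   # input activations
W = rng.standard_normal((d_in, d_out))   # layer weights
b = np.zeros(d_out)                      # layer bias

y = x @ W + b                            # the fully-connected layer as matmul
assert y.shape == (batch, d_out)
```

On accelerator hardware the same `x @ W` call is dispatched to the matrix-multiply engine, so speeding up this one primitive speeds up most of the model.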