AI Processor IP Cores

AI Processor IP cores provide dedicated, high-performance compute for AI algorithms, enabling real-time data analysis, pattern recognition, and decision-making. With support for popular AI frameworks, they are well suited to edge computing, autonomous vehicles, robotics, and smart devices.

All offers in AI Processor IP Cores

Compare 94 AI Processor IP Cores from 40 vendors (1 - 10)
  • Neuromorphic Processor IP (Second Generation)
    • Supports 8-, 4-, and 1-bit weights and activations (see the quantization sketch after this listing)
    • Programmable Activation Functions
    • Skip Connections
    • Support for Spatio-Temporal and Temporal Event-Based Neural Networks
    Block Diagram -- Neuromorphic Processor IP (Second Generation)
  • Neuromorphic Processor IP
    • Supports 4-, 2-, and 1-bit weights and activations
    • Supports multiple layers simultaneously
    • Convolutional Neural Processor (CNP) and Fully-connected Neural Processor (FNP)
    Block Diagram -- Neuromorphic Processor IP
  • Neural engine IP - Tiny and Mighty
    • The Origin E1 NPUs are individually customized for the neural networks commonly deployed in edge devices such as home appliances, smartphones, and security cameras.
    • For products that need dedicated AI processing while minimizing power consumption, silicon area, and system cost, the E1 cores offer the lowest power consumption and area in a 1 TOPS engine.
    Block Diagram -- Neural engine IP - Tiny and Mighty
  • High-Performance NPU
    • The ZIA™ A3000 AI processor IP is a low-power processor specifically designed for edge-side neural network inference processing.
    • This versatile AI processor offers general-purpose DNN acceleration, empowering customers with the flexibility and configurability to optimize performance for their specific PPA targets.
    • The A3000 also supports high-precision inference, reducing CPU workload and memory bandwidth requirements.
    Block Diagram -- High-Performance NPU
  • LLM Accelerator IP for Multimodal, Agentic Intelligence
    • HyperThought is a cutting-edge LLM accelerator IP designed to revolutionize AI applications.
    • Built for the demands of multimodal and agentic intelligence, HyperThought delivers unparalleled performance, efficiency, and security.
    Block Diagram -- LLM Accelerator IP for Multimodal, Agentic Intelligence
  • AI IP Core
    • The low-power, high-performance AI IP developed by DeepMentor integrates a RISC-V SoC. Customers can quickly integrate a unique combination of silicon intellectual property into an AI SoC.
    • System manufacturers do not need to worry about AI software integration and system development, and can immediately bring unique AI products to market.
    Block Diagram -- AI IP Core
  • High-Performance Memory Expansion IP for AI Accelerators
    • Expand Effective HBM Capacity by up to 50%
    • Enhance AI Accelerator Throughput
    • Boost Effective HBM Bandwidth
    • Integrated Address Translation and Memory Management
    Block Diagram -- High-Performance Memory Expansion IP for AI Accelerators
  • Fully-coherent RISC-V Tensor Unit
    • The bulk of the computation in Large Language Models (LLMs) lies in fully-connected layers, which can be implemented efficiently as matrix multiplication (see the matrix-multiplication sketch after this listing).
    • The Tensor Unit provides hardware tailored specifically to matrix-multiplication workloads, delivering a large performance boost for AI without a significant increase in power consumption.
    Block Diagram -- Fully-coherent RISC-V Tensor Unit
  • IP library for the acceleration of edge AI/ML
    • A library with a wide selection of hardware IPs for the design of modular and flexible SoCs that enable end-to-end inference on miniaturized systems.
    • Available IP categories include ML accelerators, dedicated memory systems, the RISC-V based 32-bit processor core icyflex-V, and peripherals.
    Block Diagram -- IP library for the acceleration of edge AI/ML
  • Vision AI DSP
    • Ceva-SensPro is a family of DSP cores designed to combine vision, Radar, and AI processing in a single architecture.
    • The silicon-proven cores provide scalable performance to cover a wide range of applications that combine vision processing, Radar/LiDAR processing, and AI inferencing to interpret their surroundings. These include automotive, robotics, surveillance, AR/VR, mobile devices, and smart homes.
    Block Diagram -- Vision AI DSP
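
The low-bit weight and activation support quoted in the neuromorphic processor entries above comes down to quantization: mapping floating-point values onto a small signed-integer grid with a shared scale. The NumPy sketch below shows symmetric per-tensor quantization; the 4-bit width, scaling scheme, and helper names are illustrative assumptions, not any vendor's actual format (1-bit operation, for instance, typically uses a dedicated binary scheme).

    import numpy as np

    def quantize_symmetric(x, bits=4):
        # Symmetric signed grid: e.g. [-7, 7] for 4-bit, [-127, 127] for 8-bit.
        qmax = 2 ** (bits - 1) - 1
        scale = max(np.abs(x).max() / qmax, 1e-12)  # per-tensor scale (per-channel is also common)
        q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        # Recover approximate float values from the integer codes.
        return q.astype(np.float32) * scale

    # Quantize a small weight tensor to 4 bits and measure the rounding error.
    w = np.random.randn(8, 8).astype(np.float32)
    w_q, s = quantize_symmetric(w, bits=4)
    print("max abs error:", np.abs(w - dequantize(w_q, s)).max())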
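
The Fully-coherent RISC-V Tensor Unit entry above rests on the observation that fully-connected layers reduce to matrix multiplication. The NumPy sketch below shows that reduction for a batch of token activations; the layer sizes and variable names are hypothetical, chosen only to make visible the single x @ W product that dedicated matrix hardware accelerates.

    import numpy as np

    # Hypothetical sizes, not taken from any product description.
    batch, seq_len, d_model, d_ff = 2, 16, 512, 2048

    x = np.random.randn(batch, seq_len, d_model).astype(np.float32)  # token activations
    W = np.random.randn(d_model, d_ff).astype(np.float32)            # fully-connected weights
    b = np.zeros(d_ff, dtype=np.float32)                             # bias

    # A fully-connected layer over every token is one matrix multiplication:
    # flatten (batch, seq_len) into rows, multiply by the weight matrix, add the bias.
    y = x.reshape(-1, d_model) @ W + b
    y = y.reshape(batch, seq_len, d_ff)
    print(y.shape)  # (2, 16, 2048)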