NPU Processor IP Cores

NPU (Neural Processing Unit) Processor IP cores provide high-performance computing power for tasks such as image recognition, natural language processing, and data analysis, enabling real-time AI processing at the edge.

Compare 50 NPU Processor IP Cores from 13 vendors (showing 1–10)
  • Embedded AI accelerator IP
    • The GenAI IP is the smallest version of our NPU, tailored to small devices such as FPGAs and Adaptive SoCs, where the maximum frequency is limited (≤250 MHz) and memory bandwidth is lower (≤100 GB/s).
    Block Diagram -- Embedded AI accelerator IP
  • NPU IP for AI Vision and AI Voice
    • 128-bit vector processing unit (shader + ext)
    • OpenCL 3.0 shader instruction set
    • Enhanced vision instruction set (EVIS)
    • INT 8/16/32-bit, Float 16/32-bit
    Block Diagram -- NPU IP for AI Vision and AI Voice
  • NPU IP for Wearable and IoT Market
    • ML inference engine for deeply embedded systems
    • NN engine
    • Supports popular ML frameworks
    • Supports a wide range of NN algorithms and is flexible in layer ordering
    Block Diagram -- NPU IP for Wearable and IoT Market
  • NPU IP for Data Center and Automotive
    • 128-bit vector processing unit (shader + ext)
    • OpenCL 1.2 shader instruction set
    • Enhanced vision instruction set (EVIS)
    • INT 8/16/32-bit, Float 16/32-bit in PPU
    • Convolution layers
    Block Diagram -- NPU IP for Data Center and Automotive
  • RISC-V-Based, Open Source AI Accelerator for the Edge
    • Coral NPU is a machine learning (ML) accelerator core designed for energy-efficient AI at the edge.
    • Based on the open RISC-V ISA, it is available as validated open-source IP for commercial silicon integration.
    Block Diagram -- RISC-V-Based, Open Source AI Accelerator for the Edge
  • GPNPU Processor IP - 32 to 864 TOPS
    • 32 to 864 TOPS
    • Up to 256K MACs (dual-, quad-, and octo-core configurations); a rough peak-throughput calculation is sketched after this listing
    • Hybrid Von Neumann + 2D SIMD matrix architecture
    • 64-bit instruction word, single instruction issue per clock
    • 7-stage, in-order pipeline
    • Scalar / vector / matrix instructions modelessly intermixed with granular predication
    Block Diagram -- GPNPU Processor IP - 32 to 864 TOPS
  • GPNPU Processor IP - 16 to 108 TOPS
    • 16 to 108 TOPS
    • 8K / 16K / 32K MACs plus 1024 ALUs
    Block Diagram -- GPNPU Processor IP - 16 to 108 TOPS
  • GPNPU Processor IP - 1 to 7 TOPS
    • 1 to 7 TOPS
    • 512 / 1K / 2K / 8K MACs plus 64 ALUs
    Block Diagram -- GPNPU Processor IP - 1 to 7 TOPS
  • GPNPU Processor IP - 4 to 28 TOPS
    • 4 to 28 TOPS
    • 2K / 4K / 8K MACs plus 256 ALUs
    Block Diagram -- GPNPU Processor IP - 4 to 28 TOPS
  • NPU IP Core for Mobile
    • Origin Evolution™ for Mobile offers out-of-the-box compatibility with popular LLM and CNN networks. Attention-based processing optimization and advanced memory management ensure optimal AI performance across a variety of today’s standard and emerging neural networks.
    • Featuring a hardware and software co-designed architecture, Origin Evolution for Mobile scales to 64 TFLOPS in a single core.
    Block Diagram -- NPU IP Core for Mobile
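
The peak-TOPS figures quoted for the GPNPU family follow directly from the MAC count and the clock rate, since each MAC unit performs one multiply and one accumulate per cycle (peak ops/s ≈ 2 × MACs × f_clk). A minimal sketch for the top-end 256K-MAC configuration, assuming an illustrative clock of roughly 1.65 GHz (an assumption for illustration, not a vendor-published figure):

```python
# Rough sanity check of the quoted peak-TOPS figure against the MAC count.
# The clock frequency below is assumed for illustration only.
MACS = 256 * 1024        # top-end configuration: 256K MAC units
OPS_PER_MAC = 2          # one multiply + one accumulate per cycle
CLOCK_HZ = 1.65e9        # assumed ~1.65 GHz clock (illustrative)

peak_tops = MACS * OPS_PER_MAC * CLOCK_HZ / 1e12
print(f"peak ≈ {peak_tops:.0f} TOPS")   # ≈ 865 TOPS, consistent with the quoted 864 TOPS
```

The same relation accounts for the smaller family members: scaling the MAC count down (e.g. to the 8K–32K MAC configurations) scales the quoted TOPS range down roughly proportionally at a comparable clock.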