NPU Processor IP Cores

NPU (Neural Processing Unit) Processor IP cores provide high-performance computing power for tasks such as image recognition, natural language processing, and data analysis, enabling real-time AI processing at the edge.

All offers in NPU Processor IP Cores

Compare 46 NPU Processor IP Cores from 12 vendors (showing 1–10)
  • RISC-V-Based, Open Source AI Accelerator for the Edge
    • Coral NPU is a machine learning (ML) accelerator core designed for energy-efficient AI at the edge.
    • Based on the open RISC-V ISA, it is available as validated, open-source IP for commercial silicon integration.
    Block Diagram -- RISC-V-Based, Open Source AI Accelerator for the Edge
  • GPNPU Processor IP - 32 to 864 TOPS
    • 32 to 864 TOPS
    • Up to 256K MACs in dual-, quad-, or octo-core configurations (a rough TOPS estimate from MAC count follows this list)
    • Hybrid von Neumann + 2D SIMD matrix architecture
    • 64-bit instruction word, single instruction issue per clock
    • 7-stage, in-order pipeline
    • Scalar / vector / matrix instructions modelessly intermixed with granular predication
    Block Diagram -- GPNPU Processor IP - 32 to 864 TOPS
  • GPNPU Processor IP - 16 to 108 TOPS
    • 16 to 108 TOPS
    • 8K / 16K / 32K MACs plus 1024 ALUs
    Block Diagram -- GPNPU Processor IP - 16 to 108 TOPS
  • GPNPU Processor IP - 1 to 7 TOPS
    • 1 to 7 TOPS
    • 512 / 1K / 2K / 8K MACs plus 64 ALUs
    Block Diagram -- GPNPU Processor IP - 1 to 7 TOPS
  • GPNPU Processor IP - 4 to 28 TOPS
    • 4 to 28 TOPS
    • 2K / 4K / 8K MACs plus 256 ALUs
    Block Diagram -- GPNPU Processor IP - 4 to 28 TOPS
  • NPU IP Core for Mobile
    • Origin Evolution™ for Mobile offers out-of-the-box compatibility with popular LLM and CNN networks. Attention-based processing optimization and advanced memory management ensure optimal AI performance across a variety of today’s standard and emerging neural networks.
    • Featuring a hardware and software co-designed architecture, Origin Evolution for Mobile scales to 64 TFLOPS in a single core.
    Block Diagram -- NPU IP Core for Mobile
  • Specialized Video Processing NPU IP
    • Highly optimized for CNN-based image processing applications
    • Fully programmable processing core: instruction-level coding with Chips&Media's proprietary Instruction Set Architecture (ISA)
    • 16-bit floating-point arithmetic unit
    • Minimal bandwidth consumption
    Block Diagram -- Specialized Video Processing NPU IP
  • NPU IP Core for Edge
    • Origin Evolution™ for Edge offers out-of-the-box compatibility with today's most popular LLM and CNN networks. Attention-based processing optimization and advanced memory management ensure optimal AI performance across a variety of networks and representations.
    • Featuring a hardware and software co-designed architecture, Origin Evolution for Edge scales to 32 TFLOPS in a single core to address the most advanced edge inference needs.
    Block Diagram -- NPU IP Core for Edge
  • NPU IP Core for Data Center
    • Origin Evolution™ for Data Center offers out-of-the-box compatibility with popular LLM and CNN networks. Attention-based processing optimization and advanced memory management ensure optimal AI performance across a variety of today’s standard and emerging neural networks.
    • Featuring a hardware and software co-designed architecture, Origin Evolution for Data Center scales to 128 TFLOPS in a single core, with multi-core performance to PetaFLOPs.
    Block Diagram -- NPU IP Core for Data Center
  • NPU IP Core for Automotive
    • Origin Evolution™ for Automotive offers out-of-the-box compatibility with popular LLM and CNN networks. Attention-based processing optimization and advanced memory management ensure optimal AI performance across a variety of today’s standard and emerging neural networks.
    • Featuring a hardware and software co-designed architecture, Origin Evolution for Automotive scales to 96 TFLOPS in a single core, with multi-core performance to PetaFLOPs.
    Block Diagram -- NPU IP Core for Automotive
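
The peak-throughput figures quoted for the GPNPU configurations above follow from MAC count and clock frequency. A common rule of thumb (an assumption here, not a vendor-published formula) counts 2 operations per MAC per clock, so peak TOPS ≈ MACs × 2 × clock (GHz) / 1000. The minimal sketch below applies that rule to the listed MAC counts; the clock frequencies are hypothetical, chosen only to illustrate the arithmetic.

```python
# Minimal sketch: estimating peak TOPS from MAC count and clock frequency.
# Assumptions (not vendor data): 2 operations per MAC per clock cycle
# (multiply + accumulate), and the illustrative clock frequencies below.

def peak_tops(num_macs: int, clock_ghz: float, ops_per_mac: int = 2) -> float:
    """Peak tera-ops/second = num_macs * ops_per_mac * clock_ghz * 1e9 / 1e12."""
    return num_macs * ops_per_mac * clock_ghz / 1_000.0

if __name__ == "__main__":
    # 256K-MAC configuration at a hypothetical ~1.65 GHz clock
    print(f"256K MACs @ 1.65 GHz -> {peak_tops(256 * 1024, 1.65):.0f} TOPS")  # ~865
    # 8K-MAC configuration at a hypothetical ~1.0 GHz clock
    print(f"  8K MACs @ 1.00 GHz -> {peak_tops(8 * 1024, 1.0):.1f} TOPS")     # ~16.4
```

Sustained throughput additionally depends on data type, MAC utilization, and memory bandwidth, so quoted TOPS figures are best read as upper bounds.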