NPU Processor IP Cores

NPU (Neural Processing Unit) processor IP cores provide high-performance compute for tasks such as image recognition, natural language processing, and data analysis, enabling real-time AI processing at the edge.

Compare 44 NPU Processor IP cores from 10 vendors (items 1-10 listed below).
  • GPNPU Processor IP - 32 to 864 TOPS
    • 32 to 864 TOPS (see the throughput sketch after the GPNPU listings below)
    • Up to 256K MACs (dual-, quad-, or octo-core)
    • Hybrid von Neumann + 2D SIMD matrix architecture
    • 64-bit instruction word, single instruction issue per clock
    • 7-stage, in-order pipeline
    • Scalar / vector / matrix instructions modelessly intermixed with granular predication
    Block Diagram -- GPNPU Processor IP - 32 to 864 TOPS
  • GPNPU Processor IP - 16 to 108 TOPS
    • 16 to 108 TOPS
    • 8K / 16K / 32K MACs plus 1024 ALUs
    Block Diagram -- GPNPU Processor IP - 16 to 108 TOPS
  • GPNPU Processor IP - 1 to 7 TOPS
    • 1 to 7 TOPS
    • 512 / 1K / 2K / 8K MACs plus 64 ALUs
    Block Diagram -- GPNPU Processor IP - 1 to 7 TOPS
  • GPNPU Processor IP - 4 to 28 TOPS
    • 4 to 28 TOPS
    • 2K / 4K / 8K MACs plus 256 ALUs
    Block Diagram -- GPNPU Processor IP - 4 to 28 TOPS
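The TOPS figures across the GPNPU family follow directly from the MAC count: each MAC contributes a multiply and an accumulate (2 ops) per clock. A minimal back-of-envelope sketch; the 1 GHz clock is an illustrative assumption, not a vendor-quoted figure:

```python
# Back-of-envelope peak-throughput estimate for a MAC-array NPU.
# Assumption: each MAC delivers 2 ops/cycle (multiply + accumulate);
# the 1.0 GHz clock is illustrative only.

def peak_tops(num_macs: int, clock_ghz: float = 1.0) -> float:
    """Peak throughput in TOPS = MACs * 2 ops/cycle * clock."""
    return num_macs * 2 * clock_ghz * 1e9 / 1e12

K = 1024
for macs in (2 * K, 8 * K, 32 * K, 256 * K):
    print(f"{macs // K:>4}K MACs -> {peak_tops(macs):6.1f} TOPS @ 1 GHz")
```

At 1 GHz a 256K-MAC configuration lands near 524 TOPS; if the quoted 864 TOPS peak comes from that configuration, it implies a clock around 1.65 GHz, so treat the clock as the free variable in these estimates.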
  • NPU IP Core for Mobile
    • Origin Evolution™ for Mobile offers out-of-the-box compatibility with popular LLM and CNN networks. Attention-based processing optimization and advanced memory management ensure optimal AI performance across a variety of today’s standard and emerging neural networks.
    • Featuring a hardware and software co-designed architecture, Origin Evolution for Mobile scales to 64 TFLOPS in a single core.
    Block Diagram -- NPU IP Core for Mobile
  • Specialized Video Processing NPU IP
    • Highly optimized for CNN-based image processing applications
    • Fully programmable processing core: instruction-level coding with Chips&Media's proprietary Instruction Set Architecture (ISA)
    • 16-bit floating-point arithmetic unit
    • Minimal bandwidth consumption (see the bandwidth sketch after this entry)
    Block Diagram -- Specialized Video Processing NPU IP
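To see why a 16-bit floating-point datapath helps with bandwidth, consider the feature-map traffic of a single CNN layer. A rough sketch; the 1080p, 64-channel layer shape is a hypothetical example, not a Chips&Media specification:

```python
# Rough feature-map size estimate for one CNN layer.
# The 1920x1080x64 shape is invented for illustration.

def feature_map_mb(width: int, height: int, channels: int,
                   bytes_per_elem: int) -> float:
    """Size of one feature map in megabytes."""
    return width * height * channels * bytes_per_elem / 1e6

w, h, c = 1920, 1080, 64
for name, nbytes in (("FP32", 4), ("FP16", 2)):
    print(f"{name}: {feature_map_mb(w, h, c, nbytes):6.1f} MB per feature map")
```

Halving the element width halves off-chip traffic for the same tensor, which is where much of the bandwidth saving of an FP16 datapath comes from; on-chip tiling and compression can reduce it further.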
  • NPU IP Core for Edge
    • Origin Evolution™ for Edge offers out-of-the-box compatibility with today's most popular LLM and CNN networks. Attention-based processing optimization and advanced memory management ensure optimal AI performance across a variety of networks and representations.
    • Featuring a hardware and software co-designed architecture, Origin Evolution for Edge scales to 32 TFLOPS in a single core to address the most advanced edge inference needs.
    Block Diagram -- NPU IP Core for Edge
  • NPU IP Core for Data Center
    • Origin Evolution™ for Data Center offers out-of-the-box compatibility with popular LLM and CNN networks. Attention-based processing optimization and advanced memory management ensure optimal AI performance across a variety of today’s standard and emerging neural networks.
    • Featuring a hardware and software co-designed architecture, Origin Evolution for Data Center scales to 128 TFLOPS in a single core, with multi-core performance to PetaFLOPS.
    Block Diagram -- NPU IP Core for Data Center
  • NPU IP Core for Automotive
    • Origin Evolution™ for Automotive offers out-of-the-box compatibility with popular LLM and CNN networks. Attention-based processing optimization and advanced memory management ensure optimal AI performance across a variety of today’s standard and emerging neural networks.
    • Featuring a hardware and software co-designed architecture, Origin Evolution for Automotive scales to 96 TFLOPS in a single core, with multi-core performance to PetaFLOPS (a multi-core scaling sketch follows this entry).
    Block Diagram -- NPU IP Core for Automotive
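The single-core-to-PetaFLOPS claim across the Origin Evolution family is linear scaling arithmetic. A minimal sketch, assuming ideal linear multi-core scaling (real efficiency depends on interconnect and memory bandwidth):

```python
import math

# Cores needed to hit a target aggregate throughput, assuming ideal
# linear multi-core scaling -- an optimistic assumption in practice.

def cores_needed(target_tflops: float, per_core_tflops: float) -> int:
    return math.ceil(target_tflops / per_core_tflops)

TARGET = 1000.0  # 1 PetaFLOPS expressed in TFLOPS
# Per-core peaks from the listings: Edge, Mobile, Automotive, Data Center.
for per_core in (32.0, 64.0, 96.0, 128.0):
    n = cores_needed(TARGET, per_core)
    print(f"{per_core:5.0f} TFLOPS/core -> {n:2d} cores for 1 PetaFLOPS")
```

Under this idealized model, the 128 TFLOPS Data Center core reaches a PetaFLOPS with 8 cores, and the 96 TFLOPS Automotive core with 11.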
  • Neural Engine IP - AI Inference for the Highest-Performing Systems
    • The Origin E8 is a family of NPU IP inference cores designed for the most performance-intensive applications, including automotive and data centers.
    • With its ability to run multiple networks concurrently with zero-penalty context switching, the E8 excels when high performance, low latency, and efficient processor utilization are required (a capacity sketch follows this entry).
    • Unlike other IPs that rely on tiling to scale performance, which introduces power, memory-sharing, and area penalties, the E8 offers single-core performance of up to 128 TOPS, delivering the computational capability required by the most advanced LLM and ADAS implementations.
    Block Diagram -- Neural Engine IP - AI Inference for the Highest-Performing Systems
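The concurrent-network claim can be reasoned about with a simple capacity model: if context switching is free, several networks fit on one core as long as their combined compute demand stays under the core's peak. A toy admission check; the per-network demands below are invented for illustration and are not vendor data:

```python
# Toy capacity check for running several networks concurrently on one
# NPU core. Per-network TOPS demands are hypothetical; zero-penalty
# context switching is taken as given, so no switching-overhead term
# appears in the model.

PEAK_TOPS = 128.0  # single-core E8 peak, per the listing

networks = {          # hypothetical demand = TOPS required at target FPS
    "detector":   45.0,
    "lane_model": 30.0,
    "llm_assist": 40.0,
}

demand = sum(networks.values())
headroom = PEAK_TOPS - demand
print(f"total demand {demand:.0f} TOPS vs {PEAK_TOPS:.0f} TOPS peak -> "
      f"{'fits' if headroom >= 0 else 'does not fit'} "
      f"({headroom:+.0f} TOPS headroom)")
```

A tiled multi-core design running the same mix would also have to account for cross-tile memory sharing and synchronization, which is the penalty the listing says the single-core approach avoids.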