NPU Processor IP Cores

NPU (Neural Processing Unit) Processor IP cores provide high-performance computing power for tasks such as image recognition, natural language processing, and data analysis, enabling real-time AI processing at the edge.

All offers in NPU Processor IP Cores

Compare 40 NPU Processor IP Cores from 10 vendors (1 - 10)
  • NPU IP Core for Edge
    • Origin Evolution™ for Edge offers out-of-the-box compatibility with today's most popular LLM and CNN networks. Attention-based processing optimization and advanced memory management ensure optimal AI performance across a variety of networks and representations.
    • Featuring a hardware and software co-designed architecture, Origin Evolution for Edge scales to 32 TFLOPS in a single core to address the most advanced edge inference needs.
    Block Diagram -- NPU IP Core for Edge
  • NPU IP Core for Data Center
    • Origin Evolution™ for Data Center offers out-of-the-box compatibility with popular LLM and CNN networks. Attention-based processing optimization and advanced memory management ensure optimal AI performance across a variety of today’s standard and emerging neural networks. Featuring a hardware and software co-designed architecture, Origin Evolution for Data Center scales to 128 TFLOPS in a single core, with multi-core performance to PetaFLOPs.
    Block Diagram -- NPU IP Core for Data Center
  • NPU IP Core for Mobile
    • Origin Evolution™ for Mobile offers out-of-the-box compatibility with popular LLM and CNN networks. Attention-based processing optimization and advanced memory management ensure optimal AI performance across a variety of today’s standard and emerging neural networks.
    • Featuring a hardware and software co-designed architecture, Origin Evolution for Mobile scales to 64 TFLOPS in a single core.
    Block Diagram -- NPU IP Core for Mobile
  • NPU IP Core for Automotive
    • Origin Evolution™ for Automotive offers out-of-the-box compatibility with popular LLM and CNN networks. Attention-based processing optimization and advanced memory management ensure optimal AI performance across a variety of today’s standard and emerging neural networks.
    • Featuring a hardware and software co-designed architecture, Origin Evolution for Automotive scales to 96 TFLOPS in a single core, with multi-core performance to PetaFLOPs.
    Block Diagram -- NPU IP Core for Automotive
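The Origin Evolution entries above quote per-core peak throughput of 32, 64, 96, and 128 TFLOPS (Edge, Mobile, Automotive, Data Center) with multi-core scaling "to PetaFLOPs". That claim can be sanity-checked with simple arithmetic; the sketch below assumes ideal linear multi-core scaling and an illustrative 1 PFLOPS (1000 TFLOPS) target — only the per-core TFLOPS figures come from the listing:

```python
import math

# Per-core peak throughput (TFLOPS), as stated in the listing.
PER_CORE_TFLOPS = {
    "Edge": 32,
    "Mobile": 64,
    "Automotive": 96,
    "Data Center": 128,
}

def cores_for_target(per_core_tflops: float, target_tflops: float) -> int:
    """Minimum core count to reach a target aggregate throughput.

    Assumes ideal (linear) multi-core scaling -- an assumption, since
    real scaling efficiency depends on interconnect and workload.
    """
    return math.ceil(target_tflops / per_core_tflops)

# Example: reaching 1 PFLOPS (1000 TFLOPS) with Data Center cores.
print(cores_for_target(PER_CORE_TFLOPS["Data Center"], 1000))  # 8
```

Under these assumptions, eight 128-TFLOPS cores (or eleven 96-TFLOPS automotive cores) would cross the 1 PFLOPS mark, which is consistent with the listing's "multi-core performance to PetaFLOPs" wording.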
  • Neural engine IP - AI Inference for the Highest Performing Systems
    • The Origin E8 is a family of NPU IP inference cores designed for the most performance-intensive applications, including automotive and data centers.
    • With its ability to run multiple networks concurrently with zero penalty context switching, the E8 excels when high performance, low latency, and efficient processor utilization are required.
    • Unlike other IPs that rely on tiling to scale performance—introducing associated power, memory sharing, and area penalties—the E8 offers single-core performance of up to 128 TOPS, delivering the computational capability required by the most advanced LLM and ADAS implementations.
    Block Diagram -- Neural engine IP - AI Inference for the Highest Performing Systems
  • High-Performance NPU
    • The ZIA™ A3000 AI processor IP is a low-power processor specifically designed for edge-side neural network inference processing.
    • This versatile AI processor offers general-purpose DNN acceleration, empowering customers with the flexibility and configurability to optimize performance for their specific PPA targets.
    • A3000 also supports high-precision inference, reducing CPU workload and memory bandwidth requirements.
    Block Diagram -- High-Performance NPU
  • LLM Accelerator IP for Multimodal, Agentic Intelligence
    • HyperThought is a cutting-edge LLM accelerator IP designed to revolutionize AI applications.
    • Built for the demands of multimodal and agentic intelligence, HyperThought delivers unparalleled performance, efficiency, and security.
    Block Diagram -- LLM Accelerator IP for Multimodal, Agentic Intelligence
  • All-In-One RISC-V NPU
    • Optimized neural processing for next-generation machine learning, with high-efficiency, scalable AI compute.
    Block Diagram -- All-In-One RISC-V NPU
  • Neural engine IP - Balanced Performance for AI Inference
    • The Origin™ E2 is a family of power and area optimized NPU IP cores designed for devices like smartphones and edge nodes.
    • It supports video (at resolutions up to 4K and beyond), audio, and text-based neural networks, including public, custom, and proprietary networks.
    Block Diagram -- Neural engine IP - Balanced Performance for AI Inference
  • Neural engine IP - The Cutting Edge in On-Device AI
    • The Origin E6 is a versatile NPU customized to match the needs of next-generation smartphones, automobiles, AR/VR, and consumer devices.
    • With support for video, audio, and text-based AI networks, including standard, custom, and proprietary networks, the E6 is the ideal hardware/software co-designed platform for chip architects and AI developers.
    • It offers broad native support for current and emerging AI models, and achieves ultra-efficient workload scheduling and memory management, with up to 90% processor utilization—avoiding dark silicon waste.
    Block Diagram -- Neural engine IP - The Cutting Edge in On-Device AI