NPU Processor IP Cores

NPU (Neural Processing Unit) Processor IP cores provide high-performance computing power for tasks such as image recognition, natural language processing, and data analysis, enabling real-time AI processing at the edge.

34 NPU Processor IP Cores from 8 vendors are listed for comparison; items 1–10 appear below.
  • Neural engine IP - AI Inference for the Highest Performing Systems
    • The Origin E8 is a family of NPU IP inference cores designed for the most performance-intensive applications, including automotive and data centers.
    • With its ability to run multiple networks concurrently with zero-penalty context switching, the E8 excels when high performance, low latency, and efficient processor utilization are required.
    • Unlike other IPs that rely on tiling to scale performance—introducing associated power, memory sharing, and area penalties—the E8 offers single-core performance of up to 128 TOPS, delivering the computational capability required by the most advanced LLM and ADAS implementations (a rough peak-throughput calculation follows this entry).
    Block Diagram -- Neural engine IP - AI Inference for the Highest Performing Systems
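    To put a figure like 128 TOPS in context: each MAC counts as two operations (a multiply and an add), so peak TOPS is roughly 2 × MACs/cycle × clock (GHz) / 1000. A minimal Python sketch of this arithmetic; the MAC count and clock rate are illustrative assumptions, not published Origin E8 specifications:

        # Rough peak-throughput arithmetic for an NPU MAC array.
        # MAC count and clock are illustrative assumptions, not
        # published Origin E8 specifications.

        def peak_tops(macs_per_cycle: int, clock_ghz: float) -> float:
            """Peak TOPS, counting each MAC as two ops (multiply + add)."""
            return 2 * macs_per_cycle * clock_ghz / 1000

        # An assumed 32,768 MACs/cycle at 2.0 GHz lands near the quoted
        # 128 TOPS single-core figure.
        print(f"{peak_tops(32_768, 2.0):.0f} TOPS")  # -> 131 TOPS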
  • Neural engine IP - Balanced Performance for AI Inference
    • The Origin™ E2 is a family of power- and area-optimized NPU IP cores designed for devices like smartphones and edge nodes.
    • It supports video (with resolutions up to 4K and beyond), audio, and text-based neural networks, including public, custom, and proprietary networks.
    Block Diagram -- Neural engine IP - Balanced Performance for AI Inference
  • Neural engine IP - The Cutting Edge in On-Device AI
    • The Origin E6 is a versatile NPU customized to match the needs of next-generation smartphones, automobiles, AR/VR, and consumer devices.
    • With support for video, audio, and text-based AI networks, including standard, custom, and proprietary networks, the E6 is the ideal hardware/software co-designed platform for chip architects and AI developers.
    • It offers broad native support for current and emerging AI models, and achieves ultra-efficient workload scheduling and memory management, with up to 90% processor utilization—avoiding dark silicon waste (see the short illustration after this entry).
    Block Diagram -- Neural engine IP - The Cutting Edge in On-Device AI
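    Utilization matters because delivered throughput is utilization times peak; an NPU that stalls its MAC array on memory yields far less than its headline TOPS. A trivial Python illustration (the peak figure is an assumed example, not an Origin E6 specification):

        # Effective vs. headline throughput at a given MAC utilization.
        # PEAK_TOPS is an assumed example, not an Origin E6 figure.
        PEAK_TOPS = 32.0
        for utilization in (0.4, 0.9):
            print(f"{utilization:.0%} utilization -> "
                  f"{PEAK_TOPS * utilization:.1f} effective TOPS")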
  • Highly scalable inference NPU IP for next-gen AI applications
    • ENLIGHT Pro is meticulously engineered for flexibility, scalability, and configurability, improving overall efficiency in a compact footprint.
    • ENLIGHT Pro supports the transformer model, a key requirement in modern AI applications, particularly Large Language Models (LLMs). LLMs, trained with deep learning techniques on extensive datasets, are instrumental in tasks such as text recognition and generation.
    Block Diagram -- Highly scalable inference NPU IP for next-gen AI applications
  • 4-/8-bit mixed-precision NPU IP
    • Features a highly optimized network model compiler that reduces DRAM traffic from intermediate activation data through grouped layer partitioning and scheduling (see the sketches after this entry).
    • ENLIGHT is easy to customize to different core sizes and performance levels for customers' targeted market applications, and achieves significant efficiencies in size, power, performance, and DRAM bandwidth, based on the industry's first adoption of 4-/8-bit mixed-precision quantization.
    Block Diagram -- 4-/8-bit mixed-precision NPU IP
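    Both points above lend themselves to quick back-of-the-envelope sketches. First, grouped layer execution cuts DRAM traffic because intermediate activations inside a group stay on-chip and only group-boundary activations round-trip through DRAM. A Python sketch with invented activation sizes (illustrative only, not ENLIGHT compiler output):

        # DRAM traffic for intermediate activations, ungrouped vs.
        # grouped (fused) execution. Sizes are invented for illustration.
        activation_mb = [8, 8, 4, 4, 2, 2]  # per-layer output activations, MB

        # Ungrouped: every intermediate activation is written to DRAM
        # and read back by the next layer.
        ungrouped = 2 * sum(activation_mb[:-1])

        # Grouped in pairs: only each group's boundary output hits DRAM.
        boundary_outputs = activation_mb[1::2][:-1]  # outputs of layers 2 and 4
        grouped = 2 * sum(boundary_outputs)

        print(f"ungrouped: {ungrouped} MB, grouped: {grouped} MB")  # 52 vs 24

    Second, 4-/8-bit mixed precision can be pictured as a per-layer choice of bit width: accuracy-sensitive layers keep 8-bit weights while tolerant layers drop to 4-bit, halving their weight footprint and DRAM bandwidth. A toy NumPy sketch of symmetric quantization at both widths (not ENLIGHT's actual quantizer):

        import numpy as np

        def quantize_symmetric(w: np.ndarray, bits: int):
            """Symmetric uniform quantization to signed `bits`-bit integers."""
            qmax = 2 ** (bits - 1) - 1          # 127 for 8-bit, 7 for 4-bit
            scale = np.abs(w).max() / qmax
            q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
            return q, scale                     # dequantize as q * scale

        w = np.random.default_rng(0).standard_normal((64, 64)).astype(np.float32)
        for bits in (8, 4):
            q, scale = quantize_symmetric(w, bits)
            print(f"{bits}-bit: mean abs error {np.abs(w - q * scale).mean():.4f}")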
  • NPU IP for Embedded ML
    • Fully programmable to efficiently execute neural networks, feature extraction, signal processing, audio, and control code
    • Scalable performance by design to meet a wide range of use cases, with configurations of up to 64 int8 MACs per cycle (natively 128 MACs per cycle in 4x8 mode); see the throughput sketch after this entry
    • Future-proof architecture that supports the most advanced ML data types and operators
    Block Diagram -- NPU IP for Embedded ML
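    The MAC figures above translate directly into peak throughput: twice the MACs per cycle times the clock. A quick Python sketch; the 1 GHz clock is an assumption for illustration, not a vendor specification:

        # Peak throughput for the quoted MAC configurations.
        # CLOCK_GHZ is an illustrative assumption.
        CLOCK_GHZ = 1.0
        for label, macs_per_cycle in [("int8", 64), ("4x8", 128)]:
            gops = 2 * macs_per_cycle * CLOCK_GHZ  # two ops per MAC
            print(f"{label}: {macs_per_cycle} MACs/cycle -> "
                  f"{gops:.0f} GOPS at {CLOCK_GHZ} GHz")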
  • Scalable Edge NPU IP for Generative AI
    • Ceva-NeuPro-M is a scalable NPU architecture, ideal for transformers, Vision Transformers (ViT), and generative AI applications, with exceptional power efficiency of up to 3500 tokens per second per watt on Llama 2 and Llama 3.2 models (see the conversion sketch after this entry)
    • The Ceva-NeuPro-M Neural Processing Unit (NPU) IP family delivers exceptional energy efficiency tailored for edge computing while offering scalable performance to handle AI models with over a billion parameters.
    Block Diagram -- Scalable Edge NPU IP for Generative AI
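    A tokens/s-per-watt figure inverts directly to energy per token, often the more intuitive metric for edge deployment, and multiplies by a power budget to give absolute throughput. A short Python conversion; the 2 W power envelope is an assumed example, not a Ceva figure:

        # Convert tokens/s/W into energy per token and into throughput
        # at an assumed power budget.
        EFFICIENCY = 3500.0  # tokens per second per watt (quoted above)
        POWER_W = 2.0        # assumed edge power envelope, illustrative only

        mj_per_token = 1000.0 / EFFICIENCY   # ~0.286 mJ per token
        tokens_per_s = EFFICIENCY * POWER_W  # 7000 tokens/s at 2 W
        print(f"{mj_per_token:.3f} mJ/token, "
              f"{tokens_per_s:.0f} tokens/s at {POWER_W:.0f} W")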
  • General Purpose Neural Processing Unit (NPU)
    • Hybrid von Neumann + 2D SIMD matrix architecture
    • 64-bit instruction word, single instruction issued per clock
    • 7-stage, in-order pipeline
    Block Diagram -- General Purpose Neural Processing Unit (NPU)
  • Enhanced Neural Processing Unit for safety providing 98,304 MACs/cycle of performance for AI applications
    • Adds hardware safety features to the NPX6 NPU while minimizing area and power impact
    • Supports the ISO 26262 automotive functional safety standard
    • Supports CNNs and transformers (including generative AI), recommender networks, RNNs/LSTMs, and more
    • IP targets ASIL B and ASIL D compliance to ISO 26262; a peak-TOPS sketch covering both NPX6 configurations listed here follows this entry
    Block Diagram -- Enhanced Neural Processing Unit for safety providing 98,304 MACs/cycle of performance for AI applications
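    The same peak-throughput arithmetic applies here and to the 8,192-MAC configuration listed next: peak TOPS is 2 × MACs/cycle × clock (GHz) / 1000. A Python sketch; the clock rates are illustrative assumptions, not vendor specifications:

        # Peak TOPS implied by the two NPX6 safety configurations.
        # Clock rates are illustrative assumptions.
        def peak_tops(macs_per_cycle: int, clock_ghz: float) -> float:
            return 2 * macs_per_cycle * clock_ghz / 1000  # two ops per MAC

        for macs in (98_304, 8_192):
            for clock_ghz in (1.0, 1.3):
                print(f"{macs:>6} MACs/cycle @ {clock_ghz} GHz -> "
                      f"{peak_tops(macs, clock_ghz):.0f} TOPS")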
  • Enhanced Neural Processing Unit for safety providing 8,192 MACs/cycle of performance for AI applications
    • Adds hardware safety features to the NPX6 NPU while minimizing area and power impact
    • Supports the ISO 26262 automotive functional safety standard
    • Supports CNNs and transformers (including generative AI), recommender networks, RNNs/LSTMs, and more
    • IP targets ASIL B and ASIL D compliance to ISO 26262
    Block Diagram -- Enhanced Neural Processing Unit for safety providing 8,192 MACs/cycle of performance for AI applications