NPU AI processor IP


43 NPU IP cores from 12 vendors (showing 1 - 10)
  • AI DSA Processor - 9-Stage Pipeline, Dual-issue
    • NI900 is a domain-specific architecture (DSA) processor based on the 900 Series.
    • It is optimized with features that specifically target AI applications.
    Block Diagram -- AI DSA Processor - 9-Stage Pipeline, Dual-issue
  • NPU IP for AI Vision and AI Voice
    • 128-bit vector processing unit (shader + ext)
    • OpenCL 3.0 shader instruction set
    • Enhanced vision instruction set (EVIS)
    • INT8/16/32 and FP16/32 data types (see the quantization sketch below)
    Block Diagram -- NPU IP for AI Vision and AI Voice
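
The INT8/16/32 and FP16/32 data-type support listed above is what enables quantized inference on NPUs of this class. A minimal sketch of symmetric INT8 quantization, using plain NumPy; the scale scheme here is a generic illustration, not this vendor's documented method:

```python
import numpy as np

def quantize_int8(x: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric per-tensor INT8 quantization: x is approximated by scale * q."""
    scale = np.max(np.abs(x)) / 127.0  # map the largest magnitude to 127
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an FP32 approximation from the INT8 values."""
    return q.astype(np.float32) * scale

# Example: FP32 weights stored as INT8, then dequantized to check the error.
w = np.random.randn(128).astype(np.float32)
q, s = quantize_int8(w)
err = np.abs(dequantize(q, s) - w).max()
print(f"max round-trip error: {err:.5f}")
```

In practice an NPU multiplies INT8 operands, accumulates in INT32, and rescales to FP16/FP32, which is why all four widths appear in the feature list.
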
  • Highly scalable inference NPU IP for next-gen AI applications
    • ENLIGHT Pro is engineered for flexibility, scalability, and configurability, improving overall efficiency in a compact footprint.
    • ENLIGHT Pro supports the transformer model, a key requirement of modern AI applications, particularly Large Language Models (LLMs). LLMs, trained with deep learning techniques on extensive datasets, handle tasks such as text recognition and generation (see the attention sketch below).
    Block Diagram -- Highly scalable inference NPU IP for next-gen AI applications
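
Transformer support ultimately means executing attention efficiently. A minimal NumPy sketch of scaled dot-product attention, the core operation of the LLMs mentioned above; this is illustrative only and says nothing about how ENLIGHT Pro maps it onto hardware:

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """softmax(Q K^T / sqrt(d)) V -- the kernel a transformer NPU must accelerate."""
    d = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d)
    return softmax(scores) @ V

# 4 tokens with 8-dimensional representations
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
print(attention(Q, K, V).shape)  # (4, 8)
```
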
  • Scalable Edge NPU IP for Generative AI
    • Ceva-NeuPro-M is a scalable NPU architecture, ideal for transformers, Vision Transformers (ViT), and generative AI applications, with exceptional power efficiency of up to 3500 Tokens-per-Second/Watt on Llama 2 and Llama 3.2 models (see the efficiency sketch below).
    • The Ceva-NeuPro-M Neural Processing Unit (NPU) IP family delivers exceptional energy efficiency tailored for edge computing while offering scalable performance to handle AI models with over a billion parameters.
    Block Diagram -- Scalable Edge NPU IP for Generative AI
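
To put a Tokens-per-Second/Watt rating in context: energy per token is simply its reciprocal, and throughput is the rating times the power budget. A back-of-the-envelope helper; the 2 W budget in the example is an assumption for illustration, not a Ceva specification:

```python
def tokens_per_second(efficiency_tps_per_watt: float, power_w: float) -> float:
    """Throughput implied by an efficiency rating at a given power budget."""
    return efficiency_tps_per_watt * power_w

def joules_per_token(efficiency_tps_per_watt: float) -> float:
    """W / (tokens/s) reduces to joules per token."""
    return 1.0 / efficiency_tps_per_watt

print(tokens_per_second(3500, power_w=2.0))          # 7000 tokens/s at a 2 W budget
print(f"{joules_per_token(3500) * 1e3:.3f} mJ/token")  # ~0.286 mJ per token
```
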
  • Neural network processor designed for edge devices
    • High energy efficiency and low power consumption
    • Supports mainstream deep learning frameworks
    • An integrated AI solution
  • ARC NPX Neural Processing Unit (NPU) IP supports the latest, most complex neural network models and addresses demands for real-time compute with ultra-low power consumption in AI applications
    • ARC processor cores are optimized to deliver best-in-class performance/power/area (PPA) efficiency for embedded SoCs. Designed from the start for power-sensitive embedded applications, ARC processors implement a Harvard architecture for higher performance through simultaneous instruction and data memory access, and a high-speed scalar pipeline for maximum power efficiency. The 32-bit RISC engine offers a mixed 16-bit/32-bit instruction set for greater code density in embedded systems.
    • ARC's high degree of configurability and instruction set architecture (ISA) extensibility contribute to its PPA efficiency. Designers can add or omit hardware features to optimize the core's PPA for their target application, with no wasted gates. Users can also add custom instructions and hardware accelerators to the core, as well as tightly coupled memory and peripherals, enabling dramatic improvements in performance and power efficiency at both the processor and system levels.
    • Complete, proven commercial and open-source tool chains, optimized for ARC processors, give SoC designers the development environment they need to efficiently build ARC-based systems that meet their PPA targets.
  • Safety-Enhanced GPNPU Processor IP
    • A true software-defined vehicle (SDV) solution
    • Fully programmable – ideal for long product life cycles
    • Scalable multicore solutions up to 864 TOPS
    • Solutions for ADAS, in-vehicle infotainment (IVI), and ECU products
  • NPU IP Core for Edge
    • Origin Evolution™ for Edge offers out-of-the-box compatibility with today's most popular LLM and CNN networks. Attention-based processing optimization and advanced memory management ensure optimal AI performance across a variety of networks and representations.
    • Featuring a hardware and software co-designed architecture, Origin Evolution for Edge scales to 32 TFLOPS in a single core to address the most advanced edge inference needs.
    Block Diagram -- NPU IP Core for Edge
  • NPU IP Core for Mobile
    • Origin Evolution™ for Mobile offers out-of-the-box compatibility with popular LLM and CNN networks. Attention-based processing optimization and advanced memory management ensure optimal AI performance across a variety of today’s standard and emerging neural networks.
    • Featuring a hardware and software co-designed architecture, Origin Evolution for Mobile scales to 64 TFLOPS in a single core.
    Block Diagram -- NPU IP Core for Mobile
  • NPU IP Core for Data Center
    • Origin Evolution™ for Data Center offers out-of-the-box compatibility with popular LLM and CNN networks. Attention-based processing optimization and advanced memory management ensure optimal AI performance across a variety of today’s standard and emerging neural networks.
    • Featuring a hardware and software co-designed architecture, Origin Evolution for Data Center scales to 128 TFLOPS in a single core, with multi-core performance into the PetaFLOPS range (see the scaling sketch below).
    Block Diagram -- NPU IP Core for Data Center
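
The multi-core claim is straightforward arithmetic: at 128 TFLOPS per core, PetaFLOPS-class throughput needs on the order of eight cores. A quick sketch; perfect linear scaling is the stated assumption here, and real multi-core scaling will be sublinear due to interconnect and memory-bandwidth limits:

```python
import math

def cores_for_target(target_tflops: float, per_core_tflops: float = 128.0) -> int:
    """Cores needed under ideal linear scaling -- an assumption, not a vendor spec."""
    return math.ceil(target_tflops / per_core_tflops)

print(cores_for_target(1000.0))  # 1 PFLOPS -> 8 cores at 128 TFLOPS/core
```
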