Processor IP

Welcome to the ultimate Processor IP hub!

Our vast directory of Processor IP cores includes AI Processor IP, GPU IP, NPU IP, DSP IP, Arm Processor IP, RISC-V Processor IP, and much more.

All offers in Processor IP

Compare 691 Processor IP cores from 121 vendors (showing 1 - 10)
  • Multi-format video decoder IP Core
    • Unified architecture supports multiple formats
    • VVC Main profile, 8/10-bit
    • AV1 Main profile, 8/10-bit
    • H.264 up to Constrained High 10 profile
  • Neural engine IP - AI Inference for the Highest Performing Systems
    • The Origin E8 is a family of NPU IP inference cores designed for the most performance-intensive applications, including automotive and data centers.
    • With its ability to run multiple networks concurrently with zero-penalty context switching, the E8 excels when high performance, low latency, and efficient processor utilization are required.
    • Unlike other IPs that rely on tiling to scale performance—introducing associated power, memory sharing, and area penalties—the E8 offers single-core performance of up to 128 TOPS, delivering the computational capability required by the most advanced LLM and ADAS implementations.
  • Neural engine IP - Balanced Performance for AI Inference
    • The Origin™ E2 is a family of power- and area-optimized NPU IP cores designed for devices like smartphones and edge nodes.
    • It supports video (at resolutions up to 4K and beyond), audio, and text-based neural networks, including public, custom, and proprietary networks.
  • Neural engine IP - The Cutting Edge in On-Device AI
    • The Origin E6 is a versatile NPU that is customized to match the needs of next-generation smartphones, automobiles, AR/VR, and consumer devices.
    • With support for video, audio, and text-based AI networks, including standard, custom, and proprietary networks, the E6 is the ideal hardware/software co-designed platform for chip architects and AI developers.
    • It offers broad native support for current and emerging AI models, and achieves ultra-efficient workload scheduling and memory management, with up to 90% processor utilization—avoiding dark silicon waste.
  • Neural engine IP - Tiny and Mighty
    • The Origin E1 NPUs are individually customized to various neural networks commonly deployed in edge devices, including home appliances, smartphones, and security cameras.
    • For products like these that require dedicated AI processing that minimizes power consumption, silicon area, and system cost, E1 cores offer the lowest power consumption and area in a 1 TOPS engine.
  • Fully-coherent RISC-V Tensor Unit
    • The bulk of computations in Large Language Models (LLMs) is in fully-connected layers that can be efficiently implemented as matrix multiplication.
    • The Tensor Unit provides hardware specifically tailored to matrix multiplication workloads, delivering a large performance boost for AI without a large increase in power consumption (see the sketch after this entry).
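    A fully-connected layer reduces to a single matrix multiplication plus a bias add, which is exactly the workload a matmul-tailored Tensor Unit accelerates. A minimal sketch (Python/NumPy, with toy dimensions chosen purely for illustration; this is not vendor code):

        import numpy as np

        # Toy dimensions for illustration: 4 tokens, hidden width 8, output width 16.
        x = np.random.randn(4, 8).astype(np.float32)    # input activations, one row per token
        W = np.random.randn(8, 16).astype(np.float32)   # fully-connected layer weights
        b = np.zeros(16, dtype=np.float32)              # bias

        # The whole layer is one matrix multiplication plus a bias add; this is the
        # operation that dedicated matmul hardware executes far faster than scalar code.
        y = x @ W + b
        assert y.shape == (4, 16)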
  • 64-bit In-Order RISC-V Customisable IP Core
    • Ready for the most demanding workloads, Avispado supports large memory capacities with its 64-bit native data path. With its complete MMU support, Avispado is also Linux-ready, including multiprocessing.
  • 64-bit Out-of-Order RISC-V Customisable IP Core
    • Ready for the most demanding workloads, Atrevido supports large memory capacities with its 64-bit native data path. With its complete MMU support, Atrevido is also Linux-ready, including multiprocessing.
  • Highly scalable inference NPU IP for next-gen AI applications
    • ENLIGHT Pro is meticulously engineered to deliver enhanced flexibility, scalability, and configurability, improving overall efficiency in a compact footprint.
    • ENLIGHT Pro supports the transformer model, a key requirement in modern AI applications, particularly Large Language Models (LLMs). LLMs, trained with deep learning techniques on extensive datasets, are instrumental in tasks such as text recognition and generation.
  • 4-/8-bit mixed-precision NPU IP
    • Features a highly optimized network model compiler that reduces DRAM traffic from intermediate activation data through grouped layer partitioning and scheduling (see the first sketch after this entry).
    • ENLIGHT is easy to customize to different core sizes and performance targets for customers' targeted market applications, and achieves significant efficiencies in size, power, performance, and DRAM bandwidth, based on the industry's first adoption of 4-/8-bit mixed-precision quantization (see the second sketch after this entry).
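    To make the DRAM-traffic claim concrete, the first sketch counts bytes moved when layers execute one at a time versus when a group of layers is scheduled together so intermediates stay on-chip (Python; all activation sizes are hypothetical, not ENLIGHT measurements):

        # Hypothetical activation sizes in bytes for a three-layer group:
        # group input, two intermediates, group output.
        acts = [2_000_000, 1_500_000, 1_000_000, 500_000]

        # Layer-by-layer execution: each intermediate is written to DRAM by one
        # layer and read back by the next, i.e. two DRAM transfers apiece.
        layerwise = acts[0] + 2 * sum(acts[1:-1]) + acts[-1]

        # Grouped execution: intermediates stay in on-chip memory, so only the
        # group's input and output touch DRAM.
        grouped = acts[0] + acts[-1]

        print(f"layer-by-layer: {layerwise:,} bytes of DRAM traffic")  # 7,500,000
        print(f"grouped:        {grouped:,} bytes of DRAM traffic")    # 2,500,000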
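    The vendor's exact quantization scheme is not public; the second sketch illustrates the generic idea behind mixed 4-/8-bit quantization using simple symmetric per-tensor quantization (Python/NumPy). A 4-bit grid halves weight storage relative to 8-bit at the cost of higher rounding error, and a mixed-precision compiler picks the narrowest width each layer tolerates:

        import numpy as np

        def quantize(x, bits):
            # Symmetric per-tensor quantization onto a signed integer grid.
            qmax = 2 ** (bits - 1) - 1              # 127 for 8-bit, 7 for 4-bit
            scale = np.abs(x).max() / qmax
            return np.clip(np.round(x / scale), -qmax, qmax), scale

        def dequantize(q, scale):
            return q * scale

        weights = np.random.randn(4096).astype(np.float32)  # toy weight tensor
        for bits in (8, 4):
            q, scale = quantize(weights, bits)
            err = np.mean(np.abs(weights - dequantize(q, scale)))
            print(f"{bits}-bit: mean abs rounding error {err:.4f}")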