Processor IP
Welcome to the ultimate Processor IP hub!
Our vast directory of Processor IP cores includes AI Processor IP, GPU IP, NPU IP, DSP IP, Arm processors, RISC-V processors, and much more.
687 Processor IP cores from 121 vendors (showing 1-10)
Neuromorphic Processor IP (Second Generation)
- Supports 8-, 4-, and 1-bit weights and activations
- Programmable Activation Functions
- Skip Connections
- Support for Spatio-Temporal and Temporal Event-Based Neural Networks
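The low-bit weight and activation support listed above can be illustrated with a generic symmetric quantization sketch. This is not vendor code; the function names are hypothetical and the scheme (per-tensor symmetric scaling) is just one common way n-bit weights are represented:

```python
import numpy as np

def quantize_symmetric(w, bits):
    """Map a float tensor to signed integers of the given bit width
    using a single per-tensor scale factor (hypothetical helper)."""
    qmax = 2 ** (bits - 1) - 1              # e.g. 7 for 4-bit signed
    scale = np.max(np.abs(w)) / qmax        # per-tensor scale
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from the quantized values."""
    return q.astype(np.float32) * scale

weights = np.array([0.9, -0.45, 0.12, -0.03], dtype=np.float32)
q4, s = quantize_symmetric(weights, bits=4)   # 4-bit representation
approx = dequantize(q4, s)                    # lossy reconstruction
```

Lower bit widths shrink storage and multiplier width at the cost of reconstruction error, which is why cores advertising 4-, 2-, or 1-bit support trade accuracy for power and area.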
Neuromorphic Processor IP
- Supports 4-, 2-, and 1-bit weights and activations
- Supports multiple layers simultaneously
- Convolutional Neural Processor (CNP) and Fully-Connected Neural Processor (FNP)
NPU IP Core for Edge
- Origin Evolution™ for Edge offers out-of-the-box compatibility with today's most popular LLM and CNN networks. Attention-based processing optimization and advanced memory management ensure optimal AI performance across a variety of networks and representations.
- Featuring a hardware and software co-designed architecture, Origin Evolution for Edge scales to 32 TFLOPS in a single core to address the most advanced edge inference needs.
NPU IP Core for Data Center
- Origin Evolution™ for Data Center offers out-of-the-box compatibility with popular LLM and CNN networks. Attention-based processing optimization and advanced memory management ensure optimal AI performance across a variety of today’s standard and emerging neural networks.
- Featuring a hardware and software co-designed architecture, Origin Evolution for Data Center scales to 128 TFLOPS in a single core, with multi-core performance reaching PetaFLOPS.
NPU IP Core for Mobile
- Origin Evolution™ for Mobile offers out-of-the-box compatibility with popular LLM and CNN networks. Attention-based processing optimization and advanced memory management ensure optimal AI performance across a variety of today’s standard and emerging neural networks.
- Featuring a hardware and software co-designed architecture, Origin Evolution for Mobile scales to 64 TFLOPS in a single core.
NPU IP Core for Automotive
- Origin Evolution™ for Automotive offers out-of-the-box compatibility with popular LLM and CNN networks. Attention-based processing optimization and advanced memory management ensure optimal AI performance across a variety of today’s standard and emerging neural networks.
- Featuring a hardware and software co-designed architecture, Origin Evolution for Automotive scales to 96 TFLOPS in a single core, with multi-core performance reaching PetaFLOPS.
Neural Engine IP - AI Inference for the Highest-Performing Systems
- The Origin E8 is a family of NPU IP inference cores designed for the most performance-intensive applications, including automotive and data centers.
- With its ability to run multiple networks concurrently with zero penalty context switching, the E8 excels when high performance, low latency, and efficient processor utilization are required.
- Unlike other IPs that rely on tiling to scale performance—introducing associated power, memory sharing, and area penalties—the E8 offers single-core performance of up to 128 TOPS, delivering the computational capability required by the most advanced LLM and ADAS implementations.
Neural Engine IP - Tiny and Mighty
- The Origin E1 NPUs are individually customized to various neural networks commonly deployed in edge devices, including home appliances, smartphones, and security cameras.
- For products like these that require dedicated AI processing that minimizes power consumption, silicon area, and system cost, E1 cores offer the lowest power consumption and area in a 1 TOPS engine.
High-Performance NPU
- The ZIA™ A3000 AI processor IP is a low-power processor specifically designed for edge-side neural network inference processing.
- This versatile AI processor offers general-purpose DNN acceleration, empowering customers with the flexibility and configurability to optimize performance for their specific PPA targets.
- A3000 also supports high-precision inference, reducing CPU workload and memory bandwidth requirements.
Parallel Processing Unit
- The Parallel Processing Unit (PPU) is an IP block that integrates tightly with the CPU on the same silicon.
- It is designed to be highly configurable to meet the specific requirements of numerous use cases.