AI processor IP
150 IP from 34 vendors (showing 1-10)
-
AI Processor Accelerator
- Universal Compatibility: Supports any framework, neural network, and backbone.
- Large Input Frame Handling: Accommodates large input frames without downsizing.
-
Powerful AI processor
- SiFive Intelligence Extensions for ML workloads
- 512-bit VLEN (see the sketch after this list)
- Performance benchmarks
- Built on silicon-proven U7-Series core
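For context on the 512-bit VLEN figure: under the standard RISC-V Vector extension, a 512-bit vector register holds 64 INT8 lanes, so a stripmined loop can process up to 64 elements per iteration at LMUL=1. A minimal sketch using the RVV C intrinsics; the function and the element-wise add are illustrative assumptions, not taken from the vendor's material:

```c
#include <riscv_vector.h>
#include <stddef.h>
#include <stdint.h>

/* With VLEN = 512 bits, one vector register holds 512 / 8 = 64 INT8 lanes,
 * so each pass of this stripmined loop handles up to 64 elements at LMUL=1.
 * Illustrative example only, not vendor code. */
void add_int8(const int8_t *a, const int8_t *b, int8_t *out, size_t n) {
    while (n > 0) {
        size_t vl = __riscv_vsetvl_e8m1(n);            /* lanes this pass */
        vint8m1_t va = __riscv_vle8_v_i8m1(a, vl);     /* load a[0..vl)   */
        vint8m1_t vb = __riscv_vle8_v_i8m1(b, vl);     /* load b[0..vl)   */
        vint8m1_t vc = __riscv_vadd_vv_i8m1(va, vb, vl);
        __riscv_vse8_v_i8m1(out, vc, vl);              /* store result    */
        a += vl; b += vl; out += vl; n -= vl;
    }
}
```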
-
IP cores for ultra-low power AI-enabled devices
- Ultra-fast Response Time
- Zero-latency Switching
- Low Power
-
Open RAN Platform for Base Station and Radio
- Fully configurable, comprehensive IP platform for 5G NR and multi-mode RAT, addressing both base station and radio Open RAN use cases
-
5G Baseband Platform IP for Mobile Broadband and IoT
- Fully configurable IP platform for 5G NR and multi-mode RAT.
- Dedicated and optimized configurations: Max for eMBB and Lite for emerging cellular IoT modems.
-
High-Performance NPU
- Low Power Consumption
- High Performance
- Flexibility and Configurability
- High-Precision Inference
-
High-performance AI dataflow processor with scalable vector compute capabilities
- Matrix Engine
- 4 X-Cores per cluster
- 1 Cluster = 16 TOPS (INT8); see the scaling sketch below
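The two bullets above imply roughly 4 INT8 TOPS per X-Core (16 TOPS across 4 cores). A minimal sketch of how nominal throughput would scale if clusters are replicated, assuming linear scaling, which is an illustrative assumption rather than a vendor claim:

```c
#include <stdio.h>

/* Figures from the listing: 4 X-Cores per cluster, 16 INT8 TOPS per cluster,
 * which implies ~4 TOPS per X-Core. Linear scaling across clusters is an
 * assumption for illustration, not a vendor claim. */
enum { CORES_PER_CLUSTER = 4, TOPS_PER_CLUSTER = 16 };

int main(void) {
    for (int clusters = 1; clusters <= 4; clusters++) {
        printf("%d cluster(s): %2d X-Cores, %3d INT8 TOPS (nominal)\n",
               clusters,
               clusters * CORES_PER_CLUSTER,
               clusters * TOPS_PER_CLUSTER);
    }
    return 0;
}
```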
-
High-performance 64-bit RISC-V architecture multi-core processor with AI vector acceleration engine
- Instruction set: RISC-V RV64GC/RV64GCV;
- Multi-core: Homogeneous multi-core with 1 to 4 configurable clusters; each cluster can have 1 to 4 cores;
-
High-performance 32-bit multi-core processor with AI acceleration engine
- Instruction set: T-Head ISA (32-bit/16-bit variable-length instruction set);
- Multi-core: Homogeneous multi-core with 1 to 4 configurable cores;
- Pipeline: 12-stage;
- Microarchitecture: Tri-issue, deep out-of-order;
-
AI inference processor IP
- High-performance, low-power, small-footprint IP for deep learning inference processing.