AI IP
760 IP from 133 vendors (1 - 10)
-
AI DSA Processor - 9-Stage Pipeline, Dual-issue
- NI900 is a DSA processor based on the 900 Series.
- NI900 is optimized with features specifically targeting AI applications.
-
224G SerDes PHY and controller for UALink for AI systems
- UALink, the standard for AI accelerator interconnects, facilitates this scalability by providing low-latency, high-bandwidth communication.
- As a member of the UALink Consortium, Cadence offers verified UALink IP subsystems, including controllers and silicon-proven PHYs, optimized for robust performance in both short- and long-reach applications and delivering industry-leading power, performance, and area (PPA).
-
RISC-V AI Acceleration Platform - Scalable, standards-aligned soft chiplet IP
- Built on RISC-V and delivered as soft chiplet IP, the Veyron E2X provides scalable, standards-based AI acceleration that customers can integrate and customize freely.
-
AI IP Core
- The low-power, high-performance AI IP developed by DeepMentor integrates with RISC-V SoCs. Customers can quickly integrate a unique combination of silicon intellectual property into an AI SoC chip.
- System manufacturers do not need to worry about AI software integration and system development, and can bring unique AI products to market immediately.
-
AI SDK for Ceva-NeuPro NPUs
- Ceva-NeuPro Studio is a comprehensive software development environment designed to streamline the development and deployment of AI models on the Ceva-NeuPro NPUs.
- It offers a suite of tools optimized for the Ceva NPU architectures, providing network optimization, graph compilation, simulation, and emulation, ensuring that developers can train, import, optimize, and deploy AI models with the highest efficiency and precision (a generic sketch of this flow follows below).
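Ceva-NeuPro Studio's own APIs are not documented on this page, so the sketch below uses ONNX Runtime as a neutral stand-in to show the generic import, optimize, and deploy flow that such an NPU SDK automates; the model path, input shape handling, and quantization settings are placeholder assumptions, not Ceva's toolchain.

    # Generic import -> optimize -> deploy flow, illustrated with ONNX Runtime as
    # a stand-in for a vendor NPU SDK; "model.onnx" and the provider are placeholders.
    import numpy as np
    import onnxruntime as ort
    from onnxruntime.quantization import QuantType, quantize_dynamic

    # 1. Import: start from a trained network exported to ONNX.
    fp32_model = "model.onnx"                                  # placeholder path

    # 2. Optimize: quantize weights to INT8 to cut bandwidth and MAC cost,
    #    the same goal a vendor graph compiler pursues for its NPU.
    int8_model = "model.int8.onnx"
    quantize_dynamic(fp32_model, int8_model, weight_type=QuantType.QInt8)

    # 3. Deploy/simulate: run the optimized graph and sanity-check outputs
    #    before targeting silicon (an NPU SDK would swap in its own backend).
    session = ort.InferenceSession(int8_model, providers=["CPUExecutionProvider"])
    inp = session.get_inputs()[0]
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]  # resolve dynamic dims
    dummy = np.random.rand(*shape).astype(np.float32)
    outputs = session.run(None, {inp.name: dummy})
    print("output shapes:", [o.shape for o in outputs])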
-
AI inference engine for real-time edge intelligence
- Flexible Models: Bring your physical AI application, open-source, or commercial model
- Easy Adoption: Based on open-specification RISC-V ISA for driving innovation and leveraging the broad community of open-source and commercial tools
- Scalable Design: Turnkey enablement for AI inference compute from tens to thousands of TOPS
-
High-Performance Memory Expansion IP for AI Accelerators
- Expand Effective HBM Capacity by up to 50%
- Enhance AI Accelerator Throughput
- Boost Effective HBM Bandwidth
- Integrated Address Translation and Memory Management (sketched below)
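The bullets above do not describe the translation mechanism, so the following is only a minimal sketch assuming a compression-style scheme: an address-translation table remaps fixed-size logical blocks onto smaller physical extents, so the same HBM stack presents more effective capacity. The block size, capacity, bump allocator, and zlib compressor are all illustrative assumptions; only the "up to 50%" figure comes from the listing.

    # Minimal sketch of an address-translation layer exposing more effective
    # capacity than the physical HBM behind it. The vendor's actual scheme is
    # not described here; block size, capacity, and zlib compression are
    # illustrative assumptions only.
    import zlib

    LOGICAL_BLOCK = 4096                       # block size seen by the accelerator
    PHYSICAL_HBM_BYTES = 96 * 2**30            # e.g. one 96 GiB HBM stack

    class TranslationTable:
        """Maps logical block numbers to (physical offset, compressed length)."""

        def __init__(self):
            self.entries = {}                  # logical block -> (offset, length)
            self.phys = bytearray()            # stands in for physical HBM

        def store(self, lbn: int, data: bytes) -> None:
            blob = zlib.compress(data)         # stand-in for the IP's compressor
            if len(self.phys) + len(blob) > PHYSICAL_HBM_BYTES:
                raise MemoryError("physical HBM exhausted")
            self.entries[lbn] = (len(self.phys), len(blob))
            self.phys += blob

        def load(self, lbn: int) -> bytes:
            off, length = self.entries[lbn]    # the address-translation step
            return zlib.decompress(bytes(self.phys[off:off + length]))

    table = TranslationTable()
    table.store(0, bytes(LOGICAL_BLOCK))       # one highly compressible 4 KiB block
    assert table.load(0) == bytes(LOGICAL_BLOCK)

    # A sustained 1.5:1 compression ratio would present ~144 GiB of effective
    # capacity from a 96 GiB stack, matching the "up to 50%" figure above.
    print(PHYSICAL_HBM_BYTES * 1.5 / 2**30, "GiB effective")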
-
Enhanced Neural Processing Unit for safety providing 98,304 MACs/cycle of performance for AI applications
- Adds hardware safety features to NPX6 NPU, minimizing area and power impact
- Supports ISO 26262 automotive safety standard
- Supports CNNs, transformers (including generative AI), recommender networks, RNNs/LSTMs, and more
- IP targets ASIL B and ASIL D compliance to ISO 26262
-
Enhanced Neural Processing Unit for safety providing 8,192 MACs/cycle of performance for AI applications
- Adds hardware safety features to NPX6 NPU, minimizing area and power impact
- Supports ISO 26262 automotive safety standard
- Supports CNNs, transformers (including generative AI), recommender networks, RNNs/LSTMs, and more
- IP targets ASIL B and ASIL D compliance to ISO 26262
-
Enhanced Neural Processing Unit for safety providing 65,536 MACs/cycle of performance for AI applications
- Adds hardware safety features to NPX6 NPU, minimizing area and power impact
- Supports ISO 26262 automotive safety standard
- Supports CNNs, transformers (including generative AI), recommender networks, RNNs/LSTMs, and more
- IP targets ASIL B and ASIL D compliance to ISO 26262
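The three safety NPU variants above are quoted in MACs/cycle rather than TOPS; the short calculation below converts those figures to peak throughput, counting each MAC as two operations. The 1.3 GHz clock is an assumed value for illustration, not a vendor specification.

    # Convert the MACs/cycle figures quoted above into peak TOPS.
    # Each MAC counts as two operations (multiply + accumulate); the 1.3 GHz
    # clock is an illustrative assumption, not a vendor specification.
    CLOCK_HZ = 1.3e9

    for macs_per_cycle in (8_192, 65_536, 98_304):
        tops = macs_per_cycle * 2 * CLOCK_HZ / 1e12
        print(f"{macs_per_cycle:>7,} MACs/cycle ≈ {tops:6.1f} peak TOPS at 1.3 GHz")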