AI accelerator IP

Compare 63 IP cores from 33 vendors (showing 1-10)
  • AI accelerator
    • Massive Floating Point (FP) Parallelism: To handle extensive computations simultaneously.
    • Optimized Memory Bandwidth Utilization: Ensuring peak efficiency in data handling.
    Block Diagram -- AI accelerator
  • AI Accelerator Specifically for CNN
    • Specialized hardware with controlled throughput and hardware cost/resources, using parameterizable layers, configurable weights, and precision settings to support fixed-point operations.
    • This hardware aims to accelerate inference, particularly for CNNs such as LeNet-5, VGG-16, VGG-19, AlexNet, ResNet-50, etc.
    Block Diagram -- AI Accelerator Specifically for CNN
  • Low power AI accelerator
    • Complete speech processing at less than 100 W
    • Able to run time-series networks for signal and speech
    • 10× more efficient than traditional NNs
  • Performance AI Accelerator for Edge Computing
    • Up to 16 TOPS
    • Up to 16 MB Local Memory
    • RISC-V/Arm Cortex-R or A 32-bit CPU
    • 3 x AXI4, 128b (Host, CPU & Data)
    Block Diagram -- Performance AI Accelerator for Edge Computing
  • Performance Efficiency AI Accelerator
    • Up to 6 TOPS
    • Up to 6 MB Local Memory
    • RISC-V/Arm Cortex-M or A 32-bit CPU
    • 3 x AXI4, 128b (Host, CPU & Data)
    Block Diagram -- Performance Efficiency AI Accelerator
  • Lowest Power and Cost End Point AI Accelerator
    • Up to 1 TOPS
    • Up to 1 MB Local Memory
    • RISC-V/Arm Cortex-M 32-bit CPU
    • 3 x AXI4, 128b (Host, CPU & Data)
    Block Diagram -- Lowest Power and Cost End Point AI Accelerator
  • AI Processor Accelerator
    • Universal Compatibility: Supports any framework, neural network, and backbone.
    • Large Input Frame Handling: Accommodates large input frames without downsizing.
    Block Diagram -- AI Processor Accelerator
  • High-Performance Memory Expansion IP for AI Accelerators
    • Expand Effective HBM Capacity by up to 50%
    • Enhance AI Accelerator Throughput
    • Boost Effective HBM Bandwidth
    • Integrated Address Translation and Memory Management
    Block Diagram -- High-Performance Memory Expansion IP for AI Accelerators
  • AI DSA Processor - 9-Stage Pipeline, Dual-issue
    • NI900 is a DSA processor based on the 900 Series.
    • NI900 is optimized with features specifically targeting AI applications.
    Block Diagram -- AI DSA Processor - 9-Stage Pipeline, Dual-issue
  • 224G SerDes PHY and controller for UALink for AI systems
    • UALink, the standard for AI accelerator interconnects, facilitates this scalability by providing low-latency, high-bandwidth communication.
    • As a member of the UALink Consortium, Cadence offers verified UALink IP subsystems, including controllers and silicon-proven PHYs, optimized for robust performance in both short and long-reach applications and delivering industry-leading power, performance, and area (PPA).
    Block Diagram -- 224G SerDes PHY and controller for UALink for AI systems