LLM Accelerator IP

4 IP from 4 vendors:
  • NPU / AI accelerator with emphasis on LLMs
    • Programmable and Model-flexible
    • Ecosystem Ready
  • AI accelerator
    • Massive Floating-Point (FP) Parallelism: handles extensive computations simultaneously.
    • Optimized Memory Bandwidth Utilization: ensures peak efficiency in data handling.
    Block Diagram -- AI accelerator
  • High-Performance Memory Expansion IP for AI Accelerators
    • Expand Effective HBM Capacity by up to 50%
    • Enhance AI Accelerator Throughput
    • Boost Effective HBM Bandwidth
    • Integrated Address Translation and Memory Management
    Block Diagram -- High-Performance Memory Expansion IP for AI Accelerators
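    The "up to 50%" effective-capacity claim above is consistent with storing data compressed in HBM behind an address-translation layer. A minimal sketch of that arithmetic, with all numbers chosen as illustrative assumptions rather than vendor specifications:

    ```python
    # Hypothetical illustration: how an inline compression ratio translates
    # into "effective" HBM capacity behind a memory-expansion IP.
    # The 96 GiB module size and 1.5x ratio are assumptions, not vendor data.

    def effective_hbm(physical_gib: float, compression_ratio: float) -> float:
        """Effective capacity when data is stored compressed in HBM.

        compression_ratio = uncompressed_size / compressed_size;
        a ratio of 1.5 corresponds to a 50% effective-capacity expansion.
        """
        return physical_gib * compression_ratio

    print(effective_hbm(96, 1.5))  # 96 GiB of physical HBM behaves like 144.0 GiB
    ```

    The same ratio applies to effective bandwidth: moving compressed data transfers more logical bytes per physical byte read, which is how such IP can also "boost effective HBM bandwidth."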
  • Neural engine IP - AI Inference for the Highest Performing Systems
    • The Origin E8 is a family of NPU IP inference cores designed for the most performance-intensive applications, including automotive and data centers.
    • With its ability to run multiple networks concurrently with zero penalty context switching, the E8 excels when high performance, low latency, and efficient processor utilization are required.
    • Unlike other IP that relies on tiling to scale performance (which introduces power, memory-sharing, and area penalties), the E8 offers single-core performance of up to 128 TOPS, delivering the computational capability required by the most advanced LLM and ADAS implementations.
    Block Diagram -- Neural engine IP - AI Inference for the Highest Performing Systems
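    A TOPS figure like the 128 TOPS quoted above typically follows from the MAC array size and clock, counting each multiply-accumulate as two operations. A back-of-the-envelope sketch, where the MAC count and clock frequency are hypothetical values chosen to land near that figure, not published E8 parameters:

    ```python
    # Back-of-the-envelope TOPS arithmetic for an NPU MAC array.
    # One MAC = 2 ops (multiply + add); num_macs and clock_ghz are assumptions.

    def tops(num_macs: int, clock_ghz: float) -> float:
        """Peak throughput in tera-operations per second."""
        return num_macs * 2 * clock_ghz * 1e9 / 1e12

    # e.g., a 65,536-MAC array at 1.0 GHz peaks at ~131 TOPS,
    # the same order of magnitude as a 128 TOPS rating.
    print(tops(65536, 1.0))
    ```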