AI Accelerator IP

Compare 41 IP from 23 vendors (showing 1-10)
  • AI accelerator (NPU) IP - 1 to 20 TOPS
    • Performance efficiency of 18 TOPS/Watt
    • Scalable performance from 2-9K MACs
    • Capable of processing HD images on-chip
    Block Diagram -- AI accelerator (NPU) IP - 1 to 20 TOPS
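For context on how figures like these relate, peak throughput is commonly estimated as MAC count × 2 operations per cycle × clock frequency, and the quoted TOPS/Watt then bounds power. The sketch below assumes a 1 GHz clock and reads "2-9K" as roughly 2K to 9K MAC units; both are illustrative assumptions, not vendor data.

```python
# Back-of-the-envelope sketch relating MAC count, clock, and the quoted 18 TOPS/Watt.
# The 1.0 GHz clock is an assumed illustrative value, not a vendor figure.

def peak_tops(num_macs: int, clock_ghz: float) -> float:
    """Peak throughput in TOPS: each MAC contributes 2 ops (multiply + add) per cycle."""
    return num_macs * 2 * clock_ghz / 1e3  # ops/cycle * Gcycles/s = GOPS; /1e3 -> TOPS

def power_watts(tops: float, tops_per_watt: float) -> float:
    """Order-of-magnitude power estimate from the quoted efficiency."""
    return tops / tops_per_watt

for macs in (2_000, 9_000):  # reading "2-9K MACs" as roughly 2K to 9K units (assumption)
    t = peak_tops(macs, clock_ghz=1.0)
    print(f"{macs} MACs @ 1 GHz ~= {t:.0f} TOPS, ~= {power_watts(t, 18):.2f} W at 18 TOPS/W")
```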
  • AI accelerator (NPU) IP - 32 to 128 TOPS
    • Performance efficiency of 18 TOPS/Watt
    • 36K-56K MACs
    • Multi-job support
    Block Diagram -- AI accelerator (NPU) IP - 32 to 128 TOPS
  • AI Accelerator (NPU) IP - 3.2 GOPS for Audio Applications
    • 3.2 GOPS
    • Ultra-low power consumption (<300 µW)
    • Low latency
    Block Diagram -- AI Accelerator (NPU) IP - 3.2 GOPS for Audio Applications
  • AI Accelerator Specifically for CNN
    • Specialized hardware with controlled throughput and hardware cost/resources, using parameterizable layers, configurable weights, and precision settings to support fixed-point operations.
    • This hardware aims to accelerate inference operations, particularly for CNNs such as LeNet-5, VGG-16, VGG-19, AlexNet, ResNet-50, etc.
    Block Diagram -- AI Accelerator Specifically for CNN
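As a rough software analogue of what this entry describes (configurable precision and fixed-point MACs for CNN inference), here is a minimal quantize-and-convolve sketch. The symmetric INT8 quantization scheme, bit widths, and function names are illustrative assumptions, not the IP's documented interface.

```python
import numpy as np

# Minimal sketch of the fixed-point convolution MAC loop such a CNN accelerator
# performs in hardware. The symmetric quantization scheme and bit widths are
# illustrative assumptions, not the IP's documented configuration.

def quantize(x: np.ndarray, bits: int = 8) -> tuple[np.ndarray, float]:
    """Symmetric linear quantization of a float tensor to signed integers."""
    qmax = 2 ** (bits - 1) - 1
    scale = float(np.max(np.abs(x))) / qmax
    scale = scale if scale > 0 else 1.0
    return np.round(x / scale).astype(np.int32), scale

def conv2d_fixed_point(act: np.ndarray, wgt: np.ndarray) -> np.ndarray:
    """Valid 2-D convolution using integer multiplies and a wide accumulator."""
    kh, kw = wgt.shape
    oh, ow = act.shape[0] - kh + 1, act.shape[1] - kw + 1
    out = np.zeros((oh, ow), dtype=np.int64)
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(act[i:i+kh, j:j+kw].astype(np.int64) * wgt)
    return out

act_q, a_scale = quantize(np.random.randn(8, 8))
wgt_q, w_scale = quantize(np.random.randn(3, 3))
result = conv2d_fixed_point(act_q, wgt_q) * (a_scale * w_scale)  # de-quantize to float
```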
  • Edge AI Accelerator NNE 1.0
    • Minimal system-integration effort
    • Speeds up AI inference performance
    • High performance for power-sensitive applications
    Block Diagram -- Edge AI Accelerator NNE 1.0
  • AI Accelerator: Neural Network-specific Optimized 1 TOPS
    • Performance efficiency of 18 TOPS/Watt
    • Capable of processing real-time HD video and images on-chip
    • Advanced activation memory management
  • AI accelerator
    • Massive Floating Point (FP) Parallelism: handles extensive computations simultaneously.
    • Optimized Memory Bandwidth Utilization: ensures peak efficiency in data handling.
    • Fully Parametrizable Design: scales seamlessly and maximizes efficiency for the target architecture through sophisticated scheduling and flow-control logic.
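Read in software terms, this entry describes a wide array of floating-point units whose count is a design-time parameter. The sketch below mimics that with a lane-partitioned dot product; the lane count and all names are illustrative assumptions, not figures from this IP core.

```python
import numpy as np

# Software sketch of massive FP parallelism with a parametrizable design: a long
# dot product partitioned across a configurable number of lanes, each
# accumulating in parallel, followed by a reduction. NUM_LANES is illustrative.

NUM_LANES = 64

def parallel_dot(a: np.ndarray, b: np.ndarray, lanes: int = NUM_LANES) -> float:
    """Lane-partitioned floating-point dot product with a final reduction."""
    partial = np.zeros(lanes, dtype=np.float32)
    for lane in range(lanes):
        sl = slice(lane, len(a), lanes)   # strided partition: one slice per lane
        partial[lane] = np.dot(a[sl], b[sl])
    return float(np.sum(partial))         # reduction (a tree reduction in hardware)

a = np.random.rand(4096).astype(np.float32)
b = np.random.rand(4096).astype(np.float32)
assert np.isclose(parallel_dot(a, b), float(np.dot(a, b)), rtol=1e-3)
```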
  • Low power AI accelerator
    • Complete speech processing at less than 100W
    • Able to run time-series networks for signal and speech
    • 10X more efficient than traditional NNs
  • AI Accelerator
    • Independent of external controller
    • Accelerates high dimensional tensors
    • Highly parallel with multi-tasking or multiple data sources
    • Optimized for performance / power / area
  • High-Performance Edge AI Accelerator
    • Performance: Up to 16 TOPS
    • MACs (8x8): 4K, 8K
    • Data Types: 1-bit, INT8, INT16
    Block Diagram -- High-Performance Edge AI Accelerator
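One of the data types listed for this accelerator is 1-bit. A common way to implement binary arithmetic in such datapaths is XNOR plus popcount, sketched below; whether this particular IP does so is an assumption for illustration, not something the listing states.

```python
# Sketch of servicing a 1-bit data type: a binary dot product over {-1, +1}
# vectors implemented as XNOR plus popcount. Illustration only; not the IP's
# documented datapath.

def binary_dot(a_bits: int, b_bits: int, n: int) -> int:
    """Dot product of two length-n vectors with elements in {-1, +1},
    packed one bit per element (bit = 1 encodes +1, bit = 0 encodes -1)."""
    matches = bin(~(a_bits ^ b_bits) & ((1 << n) - 1)).count("1")  # XNOR, then popcount
    return 2 * matches - n                                         # agreements minus disagreements

a = 0b1101  # elements [+1, -1, +1, +1], bit i (LSB first) encodes element i
b = 0b1011  # elements [+1, +1, -1, +1]
print(binary_dot(a, b, 4))  # (+1)(+1) + (-1)(+1) + (+1)(-1) + (+1)(+1) = 0
```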