CNN IP

37 IP cores from 16 vendors (showing 1–10)
  • AI Accelerator Specifically for CNN
    • Specialized hardware with controlled throughput and hardware cost/resources, using parameterizable layers, configurable weights, and precision settings to support fixed-point operations.
    • This hardware aims to accelerate inference, particularly for CNNs such as LeNet-5, VGG-16, VGG-19, AlexNet, and ResNet-50.
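As an illustration of the fixed-point inference such an accelerator performs, the sketch below quantizes activations and weights to signed 8-bit integers, accumulates in 32-bit, and dequantizes with the product of the two scales. The symmetric per-tensor scheme, the function names, and the 8-bit width are illustrative assumptions, not details of this vendor's design.

```python
import numpy as np

def quantize(x, scale, bits=8):
    """Symmetric fixed-point quantization to signed integers (illustrative)."""
    lim = 2 ** (bits - 1) - 1
    q = np.round(x / scale).astype(np.int32)
    return np.clip(q, -lim, lim)

def conv2d_int8(x_q, w_q, x_scale, w_scale):
    """Valid 2-D convolution with int32 accumulation, dequantized to float."""
    H, W = x_q.shape
    kh, kw = w_q.shape
    out = np.zeros((H - kh + 1, W - kw + 1), dtype=np.int32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # integer multiply-accumulate, as a fixed-point MAC array would do
            out[i, j] = np.sum(
                x_q[i:i + kh, j:j + kw].astype(np.int32) * w_q.astype(np.int32)
            )
    # rescale the integer result back to the floating-point domain
    return out * (x_scale * w_scale)
```

Because the accumulation is exact in int32, the only error relative to floating-point convolution comes from the initial rounding of activations and weights.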
  • Convolutional Neural Network (CNN) Compact Accelerator
    • Supports convolution, max-pooling, batch-normalization, and fully connected layers
    • Configurable weight bit width (16-bit or 1-bit)
  • IP library for the acceleration of edge AI/ML
    • A library with a wide selection of hardware IPs for the design of modular and flexible SoCs that enable end-to-end inference on miniaturized systems.
    • Available IP categories include ML accelerators, dedicated memory systems, the RISC-V based 32-bit processor core icyflex-V, and peripherals.
  • Image Processing NPU IP
    • Highly optimized for CNN-based image processing application
    • Fully programmable processing core: Instruction level coding with Chips&Media proprietary Instruction Set Architecture (ISA)
    • 16-bit floating point arithmetic unit
    • Minimum bandwidth consumption
  • Highly scalable performance for classic and generative on-device and edge AI solutions
    • Flexible System Integration: The Neo NPUs can be integrated with any host processor to offload the AI portions of the application
    • Scalable Design and Configurability: The Neo NPUs support up to 80 TOPS with a single core and are architected to enable multi-core solutions of hundreds of TOPS
    • Efficient in Mapping State-of-the-Art AI/ML Workloads: Best-in-class performance for inferences per second with low latency and high throughput, optimized for achieving high performance within a low-energy profile for classic and generative AI
    • Industry-Leading Performance and Power Efficiency: High inferences per second per area (IPS/mm²) and per watt (IPS/W)
  • High performance-efficient deep learning accelerator for edge and end-point inference
    • Configurable MACs from 32 to 4096 (INT8)
    • Maximum performance: 8 TOPS at 1 GHz
    • Configurable local memory: 16 KB to 4 MB
  • Neuromorphic Processor
    • Neural Processing Unit (NPU) with an at-memory compute architecture implementing integrate-and-fire neurons.
    • Emulates multiple neurons with configurable synapses.
    • Computes only when events occur.
    • Up to 4 bits for weights and activations.
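A minimal sketch of the event-driven integrate-and-fire behavior described above: membrane potentials are updated only when input events (spikes) arrive, and a neuron fires and resets when its potential crosses a threshold. The function name, the reset-to-zero rule, and the absence of a leak term are simplifying assumptions, not the vendor's actual neuron model.

```python
import numpy as np

def if_step(v, spike_idx, weights, threshold=1.0):
    """One event-driven integrate-and-fire update (illustrative).

    v         : membrane potentials, shape (n_neurons,)
    spike_idx : indices of input synapses that fired this step
    weights   : synaptic weight matrix, shape (n_neurons, n_inputs)
    """
    if len(spike_idx) == 0:
        # no events: no computation (the event-driven property)
        return v, np.zeros(len(v), dtype=bool)
    v = v + weights[:, spike_idx].sum(axis=1)  # integrate incoming events
    fired = v >= threshold                     # fire on threshold crossing
    v = np.where(fired, 0.0, v)                # reset fired neurons
    return v, fired
```

In hardware, the same skip-when-idle property is what lets a neuromorphic NPU consume power only in proportion to event activity rather than clock rate.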
  • IP cores for ultra-low power AI-enabled devices
    • Ultra-fast Response Time
    • Zero-latency Switching
    • Low Power
  • Sensor Fusion IP
    • Kalman Filter
    • Extended Kalman Filter
    • CNN
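For reference, one predict/update step of the linear Kalman filter that sensor-fusion IP of this kind implements. The matrix names follow the standard textbook formulation (state transition F, measurement H, process noise Q, measurement noise R); the specific models any vendor IP supports are not given in the listing.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One Kalman filter cycle: predict, then update with measurement z."""
    # predict: propagate state and covariance through the motion model
    x = F @ x
    P = F @ P @ F.T + Q
    # update: blend the prediction with the measurement via the Kalman gain
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (z - H @ x)                # corrected state estimate
    P = (np.eye(len(x)) - K @ H) @ P       # corrected covariance
    return x, P
```

The extended Kalman filter listed alongside it follows the same cycle, with F and H replaced by Jacobians of nonlinear motion and measurement functions.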
  • Machine Learning Processor
    • Partner Configurable
    • Extremely Small Area
    • Single Toolchain