Deep Learning Processor IP

Comparing 15 IP products from 14 vendors (items 1-10 shown)
  • Unified Deep Learning Processor
    • Unified deep learning/vision/video architecture enables flexibility
    • Low power consumption extends battery life and prevents overheating
    • Single scalable architecture
  • DPU for Convolutional Neural Network
    • Configurable hardware architecture
    • Configurable core count, up to three cores
    • Convolution and deconvolution (transposed convolution) support; output-size arithmetic is sketched below
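    As a reference for the convolution/deconvolution bullet above, this minimal C sketch computes the standard output dimensions for a strided convolution and for its transpose ("deconvolution"). The formulas are the textbook ones; the layer parameters are illustrative and not taken from this IP's documentation.

      #include <stdio.h>

      /* Output size of a strided convolution:
       * out = floor((in + 2*pad - kernel) / stride) + 1 */
      static int conv_out(int in, int kernel, int stride, int pad) {
          return (in + 2 * pad - kernel) / stride + 1;
      }

      /* Output size of a transposed convolution ("deconvolution"):
       * out = (in - 1) * stride - 2*pad + kernel + output_pad */
      static int deconv_out(int in, int kernel, int stride, int pad, int out_pad) {
          return (in - 1) * stride - 2 * pad + kernel + out_pad;
      }

      int main(void) {
          /* Illustrative values: 224x224 input, 3x3 kernel, stride 2, pad 1. */
          int down = conv_out(224, 3, 2, 1);        /* -> 112 */
          int up   = deconv_out(down, 3, 2, 1, 1);  /* -> 224, size restored */
          printf("conv: 224 -> %d, deconv: %d -> %d\n", down, down, up);
          return 0;
      }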
  • Imaging and Computer Vision Processor
    • Superior performance
    • Low power consumption
    • Flexible and scalable
  • Prodigy IoT/Edge Licensable Hardware IP
    • TPU AI/ML Inference IP Architecture
  • Low-power, high-speed reconfigurable processor to accelerate AI everywhere
    • Multi-Core Number: 4
    • Performance (INT8, 600 MHz): 0.6 TOPS (see the worked check below)
    • Achievable Clock Speed: 600 MHz (28 nm)
    • Synthesis Logic Gates: 2 MGates
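    The headline figures above are mutually consistent, as the quick check below shows: TOPS is operations per cycle times clock rate. The per-core MAC count derived here is inferred from the published numbers rather than quoted from the vendor, counting one MAC as two operations per the usual convention.

      #include <stdio.h>

      int main(void) {
          /* Published figures for this IP: 4 cores, 600 MHz, 0.6 TOPS @ INT8. */
          const double clock_hz = 600e6;
          const int    cores    = 4;
          const double tops     = 0.6;

          /* ops/cycle = total ops per second / clock rate */
          double ops_per_cycle = (tops * 1e12) / clock_hz;              /* 1000 */

          /* Counting 1 MAC as 2 ops (multiply + add), per core: */
          double macs_per_cycle_per_core = ops_per_cycle / 2.0 / cores; /* 125 */

          printf("ops/cycle: %.0f, implied INT8 MACs/cycle/core: %.0f\n",
                 ops_per_cycle, macs_per_cycle_per_core);
          return 0;
      }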
  • AI inference processor IP
    • High-performance, low-power, small-footprint IP for deep learning inference processing
  • High Performance Scalable Sensor Hub DSP Architecture
    • Self-contained, specialized on-device sensor hub processor
    • Unifies multi-sensor processing with AI and sensor fusion (a classic fusion filter is sketched below)
    • Highly configurable 8-way VLIW architecture
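    To make the sensor-fusion bullet concrete, here is a minimal complementary filter in C that fuses gyroscope and accelerometer readings into a pitch estimate, the kind of always-on workload a sensor hub DSP runs. This is the generic textbook filter, not code for this IP; the sample rate and blend factor are illustrative assumptions.

      #include <math.h>
      #include <stdio.h>

      #define PI 3.14159265358979323846

      /* Complementary filter: trust the gyro short-term (integration),
       * the accelerometer long-term (gravity reference). */
      static double fuse_pitch(double pitch, double gyro_rate_dps,
                               double ax, double az, double dt) {
          const double alpha = 0.98;                        /* blend factor (assumed) */
          double gyro_pitch  = pitch + gyro_rate_dps * dt;  /* integrate gyro rate */
          double accel_pitch = atan2(ax, az) * 180.0 / PI;  /* gravity-based angle */
          return alpha * gyro_pitch + (1.0 - alpha) * accel_pitch;
      }

      int main(void) {
          double pitch = 0.0;
          const double dt = 0.01; /* 100 Hz sample rate, illustrative */
          /* Fake samples: slight gyro drift against a steady 10-degree tilt. */
          for (int i = 0; i < 100; i++)
              pitch = fuse_pitch(pitch, 0.5, sin(10 * PI / 180.0),
                                 cos(10 * PI / 180.0), dt);
          printf("fused pitch estimate: %.2f deg\n", pitch);
          return 0;
      }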
  • Network Security Crypto Accelerator
    • Scalable architecture & crypto engines for optimal performance/resource usage
    • Configurable to precisely fit the target application
    • 100% CPU offload with low latency and high throughput
  • Highly scalable inference NPU IP for next-gen AI applications
    • Matrix Multiplication: 4096 MACs/cycle (INT8), 1024 MACs/cycle (INT16)
    • Vector processor: RISC-V with RVV 1.0
    • Custom instructions for softmax and local storage access (softmax is sketched below)
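    Softmax is a natural target for a custom instruction because its exponentials and normalizing reduction do not map onto a plain MAC array. The reference C implementation below, numerically stabilized by subtracting the maximum before exponentiating, shows the computation such an instruction would accelerate; it is a generic sketch, not this vendor's kernel.

      #include <math.h>
      #include <stddef.h>
      #include <stdio.h>

      /* Numerically stable softmax: subtracting the max keeps expf()
       * from overflowing; the result is then normalized by the sum. */
      static void softmax(const float *in, float *out, size_t n) {
          float max = in[0];
          for (size_t i = 1; i < n; i++)
              if (in[i] > max) max = in[i];

          float sum = 0.0f;
          for (size_t i = 0; i < n; i++) {
              out[i] = expf(in[i] - max);
              sum += out[i];
          }
          for (size_t i = 0; i < n; i++)
              out[i] /= sum;
      }

      int main(void) {
          const float logits[4] = {2.0f, 1.0f, 0.5f, -1.0f};
          float probs[4];
          softmax(logits, probs, 4);
          for (int i = 0; i < 4; i++)
              printf("p[%d] = %.4f\n", i, probs[i]);
          return 0;
      }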
  • ARC EV Processors: fully programmable and configurable IP cores optimized for embedded vision applications
    • ARC processor cores are optimized to deliver the best performance/power/area (PPA) efficiency in the industry for embedded SoCs. Designed from the start for power-sensitive embedded applications, ARC processors implement a Harvard architecture for higher performance through simultaneous instruction and data memory access, and a high-speed scalar pipeline for maximum power efficiency. The 32-bit RISC engine offers a mixed 16-bit/32-bit instruction set for greater code density in embedded systems.
    • ARC's high degree of configurability and instruction set architecture (ISA) extensibility contribute to its best-in-class PPA efficiency. Designers can add or omit hardware features to optimize the core's PPA for their target application - no wasted gates. ARC users can also add their own custom instructions and hardware accelerators to the core, as well as tightly couple memories and peripherals, enabling dramatic improvements in performance and power efficiency at both the processor and system levels (the sketch after this item illustrates the custom-instruction idea).
    • Complete and proven commercial and open source tool chains, optimized for ARC processors, give SoC designers the development environment they need to efficiently develop ARC-based systems that meet all of their PPA targets.
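    To illustrate what a designer-added custom instruction buys, the C sketch below contrasts a portable sum-of-absolute-differences (SAD) loop, a staple of embedded vision block matching, with a call through my_sad8(), a hypothetical intrinsic standing in for an instruction a designer might add. The intrinsic name and the HAVE_CUSTOM_SAD guard are invented for illustration; the real extension mechanism and toolchain syntax are defined by Synopsys' ARC documentation.

      #include <stdint.h>
      #include <stdlib.h>

      /* Portable fallback: SAD over 8 bytes, roughly the work a single
       * designer-defined SAD instruction could retire per cycle. */
      static uint32_t sad8_c(const uint8_t *a, const uint8_t *b) {
          uint32_t acc = 0;
          for (int i = 0; i < 8; i++)
              acc += (uint32_t)abs((int)a[i] - (int)b[i]);
          return acc;
      }

      /* my_sad8() is a HYPOTHETICAL intrinsic used only to show the shape
       * of the code; real names come from the vendor toolchain. */
      #ifdef HAVE_CUSTOM_SAD
      uint32_t my_sad8(const uint8_t *a, const uint8_t *b); /* maps to 1 insn */
      #define SAD8 my_sad8
      #else
      #define SAD8 sad8_c
      #endif

      /* Block-matching kernel: with the custom instruction, each 8-byte
       * SAD collapses from a multi-op loop into a single instruction. */
      static uint32_t block_sad(const uint8_t *ref, const uint8_t *cur, int rows) {
          uint32_t total = 0;
          for (int r = 0; r < rows; r++)
              total += SAD8(ref + 8 * r, cur + 8 * r);
          return total;
      }

      int main(void) {
          uint8_t a[16] = {0}, b[16];
          for (int i = 0; i < 16; i++) b[i] = (uint8_t)i;
          return block_sad(a, b, 2) == 120 ? 0 : 1; /* 0+1+...+15 = 120 */
      }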