Vendor: DragonX Systems
Category: Edge AI Accelerator

Dataflow AI Processor IP


Overview

Revolutionary dataflow architecture optimized for AI workloads with spatial compute arrays, intelligent memory hierarchies, and runtime reconfigurable elements

Dataflow-Optimized Architecture: Prometheus leverages spatial computing with reconfigurable dataflow graphs, eliminating traditional bottlenecks through intelligent data movement and compute orchestration across configurable processing elements.
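To make the dataflow idea above concrete: operators form the nodes of a graph, data moves directly between them, and execution order follows data availability rather than a fixed instruction stream. The sketch below is purely conceptual and is not DragonX's actual programming model; node names and the layer shown are illustrative assumptions.

```python
# Conceptual dataflow-graph sketch: each node fires when all of its
# inputs have arrived, as on a spatial compute array.
graph = {            # node -> downstream consumers
    "load_act": ["matmul"],
    "load_wgt": ["matmul"],
    "matmul":   ["relu"],
    "relu":     ["store"],
    "store":    [],
}

def fire_order(graph):
    """Topological order = the order nodes become ready to fire."""
    indeg = {n: 0 for n in graph}
    for succs in graph.values():
        for s in succs:
            indeg[s] += 1
    ready = [n for n, d in indeg.items() if d == 0]
    order = []
    while ready:
        n = ready.pop(0)
        order.append(n)
        for s in graph[n]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return order
```

Here `matmul` cannot fire until both loads complete, which is exactly the scheduling constraint a dataflow fabric enforces in hardware.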

Competitive Differentiation

Prometheus stands apart from key competitors through strategic advantages in performance, ecosystem, and integration

  • vs ARM Cortex: Superior performance/watt, more flexible licensing
  • vs RISC-V: Optimized microarchitecture, enhanced AI acceleration, comprehensive configuration tools
  • vs x86: Lower power, better integration, cost advantage
  • vs Custom Designs: Faster time-to-market, proven reliability

Extensive Configuration Options

The Prometheus dataflow architecture offers extensive configurability across spatial compute arrays, memory hierarchies, and dataflow orchestration, optimized for diverse AI workloads from edge inference to large-scale training.

  • Spatial Compute Arrays
    • Dataflow Processing Units (4-64)
    • Memory Reconfigurable Units (4-64)
    • AI Acceleration Elements (8-128)
    • Spatial grid dimensions (4x4 to 16x16)
    • Configurable tensor data paths (128-1024 bits)
  • Memory Hierarchy
    • 3-tier memory system
    • Register file (1K entries)
    • VMEM (16MB, 16-bank parallel)
    • HBM external memory
    • Configurable bank sizes (64KB-1MB)
  • Precision & Data Types
    • FP32, FP16, BF16, FP8 support
    • INT8, INT4 quantization
    • Custom precision formats
    • Dynamic precision switching
    • Mixed-precision workloads
  • Dataflow Interconnects
    • Spatial dataflow routing
    • Configurable tensor buses (64-512 bits)
    • Multi-dimensional data movement
    • AI workload-optimized QoS
    • High-bandwidth inter-chip links
  • Power & Clock Domains
    • Independent clock domains (2-8)
    • Dynamic voltage scaling
    • Power mode selection (4 modes)
    • Clock gating and power islands
    • Thermal management
  • Virtualization & Security
    • Hardware virtualization (up to 8 instances)
    • 256-bit encryption per instance
    • Memory protection units
    • Secure boot and HSM support
    • Side-channel attack protection
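The option ranges above can be captured as a single configuration record. The sketch below is hypothetical: the class and field names are not DragonX's API, but the numeric bounds mirror the documented ranges (4–64 Dataflow Processing Units, 4x4–16x16 grids, 128–1024-bit tensor paths, 64KB–1MB VMEM banks, 2–8 clock domains, up to 8 virtualized instances).

```python
from dataclasses import dataclass

# Hypothetical configuration record; field names are illustrative,
# bounds follow the documented option ranges.
@dataclass
class PrometheusConfig:
    dataflow_units: int = 16      # 4-64 Dataflow Processing Units
    grid_dim: int = 8             # spatial grid 4x4 .. 16x16
    tensor_path_bits: int = 512   # 128-1024-bit tensor data paths
    vmem_bank_kb: int = 256       # 64KB-1MB per VMEM bank
    clock_domains: int = 4        # 2-8 independent clock domains
    vm_instances: int = 1         # up to 8 hardware-virtualized instances

    def validate(self) -> None:
        checks = [
            4 <= self.dataflow_units <= 64,
            4 <= self.grid_dim <= 16,
            128 <= self.tensor_path_bits <= 1024,
            64 <= self.vmem_bank_kb <= 1024,
            2 <= self.clock_domains <= 8,
            1 <= self.vm_instances <= 8,
        ]
        if not all(checks):
            raise ValueError("configuration outside documented ranges")

cfg = PrometheusConfig(dataflow_units=32, grid_dim=16)
cfg.validate()
```

A validation step like this is the kind of check a vendor's configuration tools would run before RTL generation.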

Key features

  • Performance Leadership: Superior performance-per-watt ratio that sets industry benchmarks
  • Highly Configurable: Dataflow architecture with configurable spatial compute arrays, memory hierarchies, and AI-optimized interconnects
  • Integration Ready: Optimized for rapid SoC integration with minimal effort
  • Future-Proof: Architecture designed for emerging workloads and technologies

Files

Note: some files may require an NDA depending on provider policy.

Specifications

Identity

  • Part Number: PROMETHEUS
  • Vendor: DragonX Systems
  • Type: Silicon IP

Provider

DragonX Systems
HQ: India
Making Hardware Design 10x Faster Through Higher-Order Abstraction

Learn more about Edge AI Accelerator IP cores

RISC-V Based TinyML Accelerator for Depthwise Separable Convolutions in Edge AI

While lightweight architectures like MobileNetV2 employ Depthwise Separable Convolutions (DSC) to reduce computational complexity, their multi-stage design introduces a critical performance bottleneck inherent to layer-by-layer execution: the high energy and latency cost of transferring intermediate feature maps to either large on-chip buffers or off-chip DRAM. To address this memory wall, this paper introduces a novel hardware accelerator architecture that utilizes a fused pixel-wise dataflow.
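The computational saving that motivates DSC is easy to quantify: a standard KxK convolution costs H·W·Cin·Cout·K² multiply-accumulates, while DSC splits it into a depthwise stage (H·W·Cin·K²) and a pointwise stage (H·W·Cin·Cout), for a ratio of roughly 1/Cout + 1/K². The sketch below works this out; the layer shape chosen is a typical MobileNetV2-style inner layer, not a figure from the paper.

```python
def conv_macs(h, w, c_in, c_out, k):
    """Multiply-accumulates for a standard KxK conv (stride 1, 'same' padding)."""
    return h * w * c_in * c_out * k * k

def dsc_macs(h, w, c_in, c_out, k):
    """Depthwise (KxK per input channel) plus pointwise (1x1) stages."""
    depthwise = h * w * c_in * k * k
    pointwise = h * w * c_in * c_out
    return depthwise + pointwise

# Typical inner layer: 56x56 feature map, 64->64 channels, 3x3 kernel.
std = conv_macs(56, 56, 64, 64, 3)
dsc = dsc_macs(56, 56, 64, 64, 3)
print(f"standard: {std:,}  DSC: {dsc:,}  ratio: {dsc / std:.3f}")
```

For this shape the ratio is 1/64 + 1/9 ≈ 0.127, an almost 8x reduction in arithmetic, which is why the remaining bottleneck shifts to moving intermediate feature maps between the two stages.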

Accelerating Your Development: Simplify SoC I/O with a Single Multi-Protocol SerDes IP

Enter the Multi-Protocol SerDes (Serializer/Deserializer)—a flexible, reusable IP block that allows a single PHY to support multiple serial communication protocols, such as PCIe, SATA, Ethernet, USB, and more. This approach enables SoC vendors to meet diverse customer requirements and application needs without redesigning I/O for each target market.
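One way to picture a multi-protocol SerDes is as a single PHY whose line rate and line coding are reprogrammed per protocol. The sketch below is illustrative only (the class and function are not a real driver API), but the per-lane rates and encodings are the standard figures for each protocol generation.

```python
from enum import Enum

# One PHY, several personalities: (line coding, line rate in GT/s per lane).
class SerdesProtocol(Enum):
    PCIE_GEN3 = ("128b/130b", 8.0)
    SATA_3 = ("8b/10b", 6.0)
    USB3_GEN1 = ("8b/10b", 5.0)
    ETH_10G = ("64b/66b", 10.3125)  # 10GBASE-R

def effective_bandwidth_gbps(proto: SerdesProtocol) -> float:
    """Per-lane payload bandwidth after line-code overhead."""
    encoding, rate = proto.value
    num, den = (int(part.rstrip("b")) for part in encoding.split("/"))
    return rate * num / den
```

The payoff of the shared-PHY approach shows up here: switching from SATA to 10G Ethernet changes only the coding and rate parameters, not the analog front end, which is what lets one IP block cover several target markets.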

Frequently asked questions about Edge AI Accelerator IP cores

What is Dataflow AI Processor IP?

Dataflow AI Processor IP is an Edge AI Accelerator IP core from DragonX Systems listed on Semi IP Hub.

How should engineers evaluate this Edge AI Accelerator?

Engineers should review the overview, key features, supported foundries and nodes, maturity, deliverables, and provider information before shortlisting this Edge AI Accelerator IP.

Can this semiconductor IP be compared with similar products?

Yes. Buyers can compare this product with similar semiconductor IP cores or IP families based on category, provider, process options, and structured technical specifications.
