Vendor: Gyrus AI Category: Edge AI Accelerator

AI Processor Accelerator

Overview

Gyrus's ground-breaking AI Processor Accelerator IP, coupled with a native graph-processing software stack, is the ultimate solution for seamless neural network implementation. We have cracked the code on scalability, programmability, and power consumption with a tightly integrated hardware and software approach.
The secret to our success lies in efficient utilization of compute elements and intelligent memory reuse through reinforcement-learning-based software, ensuring a seamless flow of data to the compute engines with minimal idle cycles. With Gyrus's compilers and software tools, you can effortlessly port any neural network to our hardware accelerator, unlocking exceptional efficiency, even with substantial activations and weights.

Our compilers streamline hardware configuration, reducing SoC complexity and power consumption while enabling AI algorithms to run smoothly on edge devices. The scheduler performs a neural schedule search based on reinforcement learning. With a cycle-accurate C model, we create a Digital Twin of the NNA IP, ensuring long-term model deployment efficiency. Elevate your edge device capabilities with Cortisoft from Gyrus!

Key features

  • OPTIMIZED COMPUTATION - >80% utilization
  • LOW MEMORY - 16x reduced
  • SPEED - 10-30x lower clock cycles
  • HIGH EFFICIENCY - 30 TOPs/W
  • LOW POWER - 10-20x lower
  • SMALL AREA - 8-10x smaller die area
  • Scalable RTL via parameters for performance and power:
      • Number of ALUs
      • Number of clusters
      • Activation memory size per cluster (256 KB or 512 KB local memory typical)
      • External memory: DDR or no DDR
      • Internal system memory
      • External shared memory (optional)
  • No long interconnects or interconnect fabric
  • Designed for high-speed and/or HVt cells
  • Synthesized and P&R complete at 800 MHz in 7nm
  • Hardware configuration is an input to the compiler
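To make the "hardware configuration input to compiler" feature concrete, here is a minimal sketch of how the scalable RTL parameters above could be expressed as a compiler input, with a back-of-the-envelope peak-throughput calculation. The parameter names and values are purely illustrative assumptions, not Gyrus's actual compiler interface:

```python
# Hypothetical hardware-configuration record mirroring the RTL parameters
# listed above. Field names are illustrative, not the real interface.
hw_config = {
    "num_alus_per_cluster": 64,            # "Number of ALUs"
    "num_clusters": 8,                     # "Number of clusters"
    "activation_mem_per_cluster_kb": 512,  # 256 KB or 512 KB typical
    "external_ddr": False,                 # DDR or no DDR
    "clock_mhz": 800,                      # synthesized at 800 MHz in 7nm
}

def peak_tops(cfg):
    # One MAC per ALU per cycle counts as 2 ops; result in tera-ops/second.
    macs_per_cycle = cfg["num_alus_per_cluster"] * cfg["num_clusters"]
    return macs_per_cycle * 2 * cfg["clock_mhz"] * 1e6 / 1e12

print(peak_tops(hw_config))  # ~0.82 TOPS for this illustrative config
```

Feeding such a record to the compiler is what lets the same software stack target differently sized instances of the RTL.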

Block Diagram

Benefits

  • Universal Compatibility: Supports any framework, neural network, and backbone.
  • Large Input Frame Handling: Accommodates large input frames without downsizing.
  • Parallel Design: Achieves high performance at low operational frequency.
  • Memory Efficiency: Reduces memory usage with data traversal-based optimization.
  • Versatile Stationary Modes: Efficiently manages both input-stationary and weight-stationary setups.
  • Model & Activation Memory: Stays in sleep mode most of the time.
  • Graph SIMD Compiler: Enables efficient network deployment.
  • Optimal Data Movement and Compute Instructions: Maximizes AI performance.
  • Memory Architecture: Drastically minimizes data movement and conserves memory.
  • Sparse NN Implementation: Efficiently handles sparse neural networks, reducing model size and compute demands by over 3 times.
  • Minimal Host Code Dependency: Requires very little host-side code for AI workloads.
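The sparse-NN benefit above can be illustrated with a minimal CSR-style storage sketch in plain Python. The matrix size and ~70% sparsity below are assumptions chosen for illustration, not Gyrus figures:

```python
import random

random.seed(0)
rows, cols, sparsity = 64, 64, 0.7  # illustrative, not product specs

# Dense weight matrix with ~70% of entries pruned to zero
dense = [[0.0 if random.random() < sparsity else random.gauss(0, 1)
          for _ in range(cols)] for _ in range(rows)]

# CSR-style storage: keep only nonzero values plus their column indices
values, col_idx, row_ptr = [], [], [0]
for row in dense:
    for j, w in enumerate(row):
        if w != 0.0:
            values.append(w)
            col_idx.append(j)
    row_ptr.append(len(values))

dense_count = rows * cols
print(dense_count, len(values))  # 4096 dense entries vs. far fewer nonzeros
```

Since MAC work in a sparse implementation scales with the nonzero count rather than the dense size, roughly 70% sparsity is what yields a model-size and compute reduction of over 3x.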

Applications

  • Automotive
  • Smart Devices
  • Security & Surveillance
  • IoT
  • High Performance Computing
  • Robotics

Files

Note: some files may require an NDA depending on provider policy.

Specifications

Identity

Part Number
NNA
Vendor
Gyrus AI

Provider

Gyrus AI
HQ: United States
Gyrus AI develops Neural Network Accelerators, AI/ML models, algorithms, and frameworks in the areas of video processing and video analytics. Gyrus AI has built AI Processor IP; the solution is a combination of a hardware and software stack. Gyrus develops ready-to-deploy NN models for video processing, such as video anonymization, video upscaling, activity detection and process monitoring, and in-scene replacement, for the surveillance, broadcasting, automotive, healthcare, and industrial sectors.

Learn more about Edge AI Accelerator IP core

RISC-V Based TinyML Accelerator for Depthwise Separable Convolutions in Edge AI

While lightweight architectures like MobileNetV2 employ Depthwise Separable Convolutions (DSC) to reduce computational complexity, their multi-stage design introduces a critical performance bottleneck inherent to layer-by-layer execution: the high energy and latency cost of transferring intermediate feature maps to either large on-chip buffers or off-chip DRAM. To address this memory wall, this paper introduces a novel hardware accelerator architecture that utilizes a fused pixel-wise dataflow.
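The compute saving that motivates DSC in the paragraph above can be made concrete with a quick MAC count. The layer dimensions below are illustrative assumptions, not taken from the paper:

```python
# MAC counts for a standard convolution vs. a depthwise separable
# convolution (DSC) producing the same output shape.
def standard_conv_macs(h, w, k, c_in, c_out):
    return h * w * k * k * c_in * c_out

def dsc_macs(h, w, k, c_in, c_out):
    depthwise = h * w * k * k * c_in   # one k x k filter per input channel
    pointwise = h * w * c_in * c_out   # 1x1 conv mixes channels
    return depthwise + pointwise

# Example: 112x112 feature map, 3x3 kernel, 32 -> 64 channels
std = standard_conv_macs(112, 112, 3, 32, 64)
dsc = dsc_macs(112, 112, 3, 32, 64)
print(std / dsc)  # roughly 8x fewer MACs
```

The reduction factor is 1 / (1/c_out + 1/k^2), so the saving grows with the channel count; the intermediate depthwise output is exactly the feature map whose transfer cost the fused pixel-wise dataflow targets.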

Accelerating Your Development: Simplify SoC I/O with a Single Multi-Protocol SerDes IP

Enter the Multi-Protocol SerDes (Serializer/Deserializer)—a flexible, reusable IP block that allows a single PHY to support multiple serial communication protocols, such as PCIe, SATA, Ethernet, USB, and more. This approach enables SoC vendors to meet diverse customer requirements and application needs without redesigning I/O for each target market.

Frequently asked questions about Edge AI Accelerator IP cores

What is AI Processor Accelerator?

AI Processor Accelerator is an Edge AI Accelerator IP core from Gyrus AI listed on Semi IP Hub.

How should engineers evaluate this Edge AI Accelerator?

Engineers should review the overview, key features, supported foundries and nodes, maturity, deliverables, and provider information before shortlisting this Edge AI Accelerator IP.

Can this semiconductor IP be compared with similar products?

Yes. Buyers can compare this product with similar semiconductor IP cores or IP families based on category, provider, process options, and structured technical specifications.
