FeNN-DMA: A RISC-V SoC for SNN acceleration
By Zainab Aizaz, James C. Knight, and Thomas Nowotny
School of Engineering and Informatics, University of Sussex, Brighton, UK

Abstract
Spiking Neural Networks (SNNs) are a promising, energy-efficient alternative to standard Artificial Neural Networks (ANNs) and are particularly well-suited to spatio-temporal tasks such as keyword spotting and video classification. However, SNNs have a much lower arithmetic intensity than ANNs and are therefore not well-matched to standard accelerators like GPUs and TPUs. Field Programmable Gate Arrays (FPGAs) are designed for such memory-bound workloads, and here we develop a novel, fully programmable RISC-V-based system-on-chip (FeNN-DMA) tailored to simulating SNNs on modern UltraScale+ FPGAs. We show that FeNN-DMA has comparable resource usage and energy requirements to state-of-the-art fixed-function SNN accelerators, yet it is capable of simulating much larger and more complex models. Using this functionality, we demonstrate state-of-the-art classification accuracy on the Spiking Heidelberg Digits and Neuromorphic MNIST tasks.
Index Terms—Spiking Neural Networks (SNN), Field Programmable Gate Array (FPGA), RISC-V, Vector processor.
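To illustrate why SNN simulation is memory-bound, consider a generic leaky integrate-and-fire (LIF) neuron update loop. This is a minimal sketch for illustration only, not FeNN-DMA's actual kernel; the function name, array layout, and parameters are assumptions. Each neuron requires only a few arithmetic operations per timestep but touches several bytes of state, so the arithmetic intensity (operations per byte of memory traffic) is low.

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_NEURONS 1024

/* Illustrative LIF neuron update (hypothetical, not FeNN-DMA's kernel).
 * Per neuron and timestep: ~2 arithmetic operations against ~12 bytes of
 * state loaded/stored, so the loop is memory-bound rather than
 * compute-bound. */
void lif_update(float v[NUM_NEURONS], float i_syn[NUM_NEURONS],
                bool spiked[NUM_NEURONS],
                float alpha, float v_thresh, float v_reset)
{
    for (int n = 0; n < NUM_NEURONS; n++) {
        /* Leaky integration: exponential decay plus synaptic input */
        v[n] = alpha * v[n] + i_syn[n];

        /* Threshold crossing: emit a spike and reset the membrane */
        if (v[n] >= v_thresh) {
            spiked[n] = true;
            v[n] = v_reset;
        } else {
            spiked[n] = false;
        }

        /* Synaptic input is consumed each timestep */
        i_syn[n] = 0.0f;
    }
}
```

A GPU or TPU would leave most of its arithmetic units idle on a loop like this, whereas an FPGA design can provision just enough compute and dedicate its resources to keeping the memory system busy.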