PermuteV: A Performant Side-channel-Resistant RISC-V Core Securing Edge AI Inference
By Nuntipat Narkthong, Xiaolin Xu
Northeastern University

Abstract
Edge AI inference is becoming prevalent thanks to the emergence of small yet high-performance microprocessors. This shift from cloud to edge processing brings several benefits, including energy savings, lower latency, and improved privacy. On the downside, moving computation to the edge makes these devices more vulnerable to physical side-channel attacks (SCA), which aim to extract confidential information about neural network models, e.g., their architecture and weights. To address this growing threat, we propose PermuteV, a performant side-channel-resistant RISC-V core designed to secure neural network inference. PermuteV employs a hardware-accelerated defense mechanism that randomly permutes the execution order of loop iterations, thereby obfuscating the electromagnetic (EM) signature associated with sensitive operations. We implement PermuteV on an FPGA and evaluate it in terms of side-channel security, hardware area, and runtime overhead. The experimental results demonstrate that PermuteV effectively defends against EM SCA with minimal area and runtime overhead.
Index Terms: Side-Channel, Defense, Microarchitecture, RISC-V
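To illustrate the permutation idea described in the abstract, the C sketch below shows a software analogue in which the multiply-accumulate iterations of a dot product over secret weights are visited in a randomly shuffled order. This is only a conceptual illustration: the function names, the use of rand(), and the dot-product example are our own assumptions, not part of the PermuteV design, which performs the permutation in hardware rather than in software.

/*
 * Software analogue of loop-iteration permutation (illustrative only;
 * PermuteV realizes this mechanism in hardware, and its actual design
 * may differ). The accumulated result is order-independent, but each
 * run follows a different, randomly chosen iteration order, so the
 * EM/power trace of the sensitive loop varies from run to run.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Fisher-Yates shuffle of the iteration-index array. A real secure
 * implementation would draw randomness from a hardware TRNG, not rand(). */
static void shuffle_indices(size_t *idx, size_t n)
{
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % (i + 1);
        size_t tmp = idx[i];
        idx[i] = idx[j];
        idx[j] = tmp;
    }
}

/* Dot product of secret weights w with input x, with the
 * multiply-accumulate iterations executed in permuted order. */
int32_t permuted_dot(const int8_t *w, const int8_t *x, size_t n)
{
    size_t *idx = malloc(n * sizeof *idx);
    if (!idx)
        return 0;                   /* error handling elided for brevity */
    for (size_t i = 0; i < n; i++)
        idx[i] = i;
    shuffle_indices(idx, n);

    int32_t acc = 0;
    for (size_t i = 0; i < n; i++) {
        size_t k = idx[i];          /* permuted iteration index */
        acc += (int32_t)w[k] * x[k];
    }
    free(idx);
    return acc;
}

Because the partial sums commute, permuting the iteration order leaves the inference result unchanged while decorrelating the temporal position of each weight access from its value, which is what frustrates EM trace alignment.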