A RISC-V ISA Extension For Ultra-Low Power IoT Wireless Signal Processing
By Hela Belhadj Amor, Carolynn Bernier, and Zdeněk Přikryl
This work presents an instruction-set extension to the open-source RISC-V ISA (RV32IM) dedicated to ultra-low power (ULP) software-defined wireless IoT transceivers. The custom instructions are tailored to the needs of the 8/16/32-bit integer complex arithmetic typically required by quadrature modulations. The proposed extension occupies only two major opcodes, and most instructions are designed to come at near-zero energy cost. Both an instruction-accurate (IA) and a cycle-accurate (CA) model of the new architecture are used to evaluate six IoT baseband processing test benches, including FSK demodulation and LoRa preamble detection. Simulation results show cycle-count improvements ranging from 19% to 68%. Post-synthesis simulations for a target 22nm FD-SOI technology show power and area overheads of less than 1% and 28%, respectively, relative to a baseline RV32IM design. Power simulations show a peak power consumption of 380 μW for Bluetooth LE demodulation and 225 μW for LoRa preamble detection (BW = 500 kHz, SF = 11).
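To illustrate the kind of kernel such complex-arithmetic instructions target, the sketch below shows a plain-C, 16-bit fixed-point FSK frequency discriminator built from a conjugate multiply of consecutive I/Q samples. The data types, scaling, and function names are illustrative assumptions, not the authors' code or the semantics of the proposed instructions; the point is only that the inner loop is dominated by short-word complex multiplies, which is what a baseline RV32IM core handles inefficiently.

```c
/*
 * Illustrative sketch only: a fixed-point FSK discriminator of the kind the
 * proposed complex-arithmetic instructions are meant to accelerate. The
 * imaginary part of s[n] * conj(s[n-1]) tracks the instantaneous frequency,
 * whose sign an FSK demodulator thresholds to recover bits.
 * Types, scaling (Q1.15) and names are assumptions for illustration.
 */
#include <stdint.h>
#include <stdio.h>

/* One complex baseband sample, 16-bit I and Q (Q1.15 assumed). */
typedef struct {
    int16_t re;
    int16_t im;
} cplx16_t;

/* Imaginary part of curr * conj(prev), scaled back to 16 bits. */
static int16_t fsk_discriminate(cplx16_t curr, cplx16_t prev)
{
    int32_t im = (int32_t)curr.im * prev.re - (int32_t)curr.re * prev.im;
    return (int16_t)(im >> 15);  /* keep Q1.15 scaling */
}

int main(void)
{
    /* Toy input: a phasor rotating by +45 degrees per sample,
     * i.e. a constant positive frequency deviation. */
    cplx16_t samples[] = {
        {  32767,     0 },
        {  23170, 23170 },
        {      0, 32767 },
        { -23170, 23170 },
    };
    size_t n = sizeof(samples) / sizeof(samples[0]);

    for (size_t i = 1; i < n; i++) {
        int16_t d = fsk_discriminate(samples[i], samples[i - 1]);
        printf("sample %zu: discriminator = %d (bit = %d)\n",
               i, d, d > 0 ? 1 : 0);
    }
    return 0;
}
```

On an RV32IM baseline, each call to the discriminator compiles to several 32-bit multiplies, shifts, and add/subtract instructions per sample pair; a fused complex multiply-accumulate on packed 16-bit operands, as targeted by the extension described above, collapses this into far fewer instructions, which is where the reported cycle-count improvements come from.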
Related Semiconductor IP
- RISC-V Vector Extension
- RISC-V Real-time Processor
- RISC-V High Performance Processor
- 32b/64b RISC-V 5-stage, scalar, in-order, Application Processor. Linux and multi-core capable. Maps up to ARM Cortex-A35. Optimal PPA.
- 32 Bit - Embedded RISC-V Processor Core
Related White Papers
- Extending RISC-V ISA With a Custom Instruction Set Extension
- ISA optimizations for hardware and software harmony: Custom instructions and RISC-V extensions
- Unlock the processing power of wireless modules
- Think Big for Ultra-Low Power IoT SoCs