Hardware-based floating-point design flow
Michael Parker, Altera Corporation
EETimes (1/17/2011 3:22 PM EST)
Floating-point processing is widely used in computing for many different applications. In most software languages, floating-point variables are denoted as “float” or “double.” Integer variables are used for what is known as fixed-point processing.
Floating-point processing uses the format defined in IEEE 754 and is supported by microprocessor architectures. However, the IEEE 754 format is inefficient to implement directly in hardware, and floating-point processing is not natively supported in VHDL or Verilog. Newer languages, such as SystemVerilog, allow floating-point variables, but industry-standard synthesis tools do not support synthesizing them.
In embedded computing, fixed-point or integer-based representation is often used due to the simpler circuitry and lower power needed to implement fixed-point processing compared to floating-point processing. Many embedded computing or processing operations must be implemented in hardware—either in an ASIC or an FPGA.
However, due to technology limitations, hardware-based processing is almost always done in fixed point. Many applications could benefit from floating-point processing, but this limitation forces a fixed-point implementation. Were it feasible, applications in wireless communications, radar, medical imaging, and motor control could all benefit from the high dynamic range afforded by floating-point processing.
Before discussing a new approach that enables floating-point implementation in hardware with performance similar to that of fixed-point processing, it is first necessary to discuss the reason why floating-point processing has not been very practical up to this point. This paper focuses on FPGAs as the hardware-processing devices, although most of the methods discussed can be applied to any hardware architecture.
After a discussion of the challenges of implementing floating-point processing, a new approach used to overcome these issues will be presented. Next, some of the key applications for floating-point processing, involving linear algebra, are discussed, along with the additional features needed to support these types of designs in hardware. Performance benchmarks of FPGA floating-point processing examples are also provided.