C-Language techniques for FPGA acceleration of embedded software
By David Pellerin (ImpulseC) and Kunal Shenoy (Xilinx)
Mar 31, 2006, Courtesy of Programmable Logic DesignLine
Developers of embedded and high-performance systems are taking increased advantage of FPGAs for hardware-accelerated computing. FPGA computing platforms effectively bridge the gap between software programmable systems based on traditional microprocessors and systems based on custom hardware functions. Advances in design tools have made it easier to create hardware-accelerated applications directly from C language representations, but it is important to understand how to use these tools to the best advantage, and how decisions made during the design and programming of mixed hardware/software systems will impact overall performance.
This paper presents a brief overview of modern FPGA-based platforms and related software-to-hardware tools, then moves quickly into a set of examples showing how computationally-intensive algorithms can be written, analyzed and optimized for increased performance.
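To give a feel for the kind of C kernel such software-to-hardware tools accept, the sketch below expresses a simple FIR filter in a hardware-friendly style: fixed-size arrays, a constant trip count, and a shift-register buffer that a C-to-FPGA compiler can typically unroll or pipeline. This is a generic illustration, not an example from the full article; the fir_filter function, the TAPS count, and the coefficient values are all hypothetical.

    #include <stdio.h>

    #define TAPS 8  /* fixed, compile-time loop bound: friendly to unrolling */

    /* Hypothetical FIR kernel written in a hardware-friendly style:
     * fixed-size arrays, no dynamic memory, and inner loops with
     * constant trip counts that a C-to-FPGA compiler can unroll
     * or pipeline. */
    static int fir_filter(const short coeff[TAPS], short shift_reg[TAPS],
                          short sample)
    {
        int acc = 0;
        int i;

        /* shift in the newest sample (models a hardware shift register) */
        for (i = TAPS - 1; i > 0; i--)
            shift_reg[i] = shift_reg[i - 1];
        shift_reg[0] = sample;

        /* multiply-accumulate across all taps */
        for (i = 0; i < TAPS; i++)
            acc += coeff[i] * shift_reg[i];

        return acc;
    }

    int main(void)
    {
        /* placeholder coefficients and input data, for illustration only */
        const short coeff[TAPS] = { 1, 2, 3, 4, 4, 3, 2, 1 };
        short shift_reg[TAPS] = { 0 };
        short input[16];
        int n;

        for (n = 0; n < 16; n++)
            input[n] = (short)(n * 10);

        for (n = 0; n < 16; n++)
            printf("y[%d] = %d\n", n, fir_filter(coeff, shift_reg, input[n]));

        return 0;
    }

Written this way, the multiply-accumulate loop exposes TAPS independent multiplications per sample, which is exactly the kind of parallelism an FPGA compiler can map to dedicated multipliers rather than executing sequentially as a processor would.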