Inline Code in C and C++
Colin Walls, Mentor Graphics
Embedded.com (November 1, 2013)
Replacing a function call with a copy of the function's code can be an effective optimization, particularly if execution speed is the priority. This article looks at how inlining works, when it can be effective, and how it may happen automatically in C and C++.
Inlining basics
To a first approximation, all desktop computers are the same. It is straightforward to write acceptable applications that will run on anyone's machine, and the broad expectations of users are the same. Embedded systems, by contrast, are all different: the hardware and software environment varies widely, and the expectations of users are just as diverse. In many ways, this is what makes embedded software development particularly interesting.
An embedded compiler is likely to have a great many options to control optimization. Sometimes that fine-grained control is vital; on other occasions, it comes down to a simple choice between optimizing for speed or for size. This choice may seem curious, but it reflects an empirical observation: small code is often slower, and fast code tends to need more memory.
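As a concrete illustration (assuming a GCC- or Clang-style toolchain; the file name is purely hypothetical), the broad speed/size choice is often expressed as a single command-line flag, with finer-grained controls layered on top:

/* speed_vs_size.c -- illustrative only, not from the article.
 *
 * With GCC or Clang, the high-level speed/size trade-off is usually
 * selected on the command line:
 *
 *   gcc -O2 -c speed_vs_size.c    optimize primarily for speed
 *   gcc -Os -c speed_vs_size.c    optimize primarily for size
 *
 * Embedded toolchains typically add many more specific optimization
 * switches beyond this simple choice.
 */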
An obvious example is function inlining. A small function can be optimized so that its actual code is placed in line at each call site. This executes faster because the call/return sequence is eliminated, and stack usage may also be reduced. But inlining has the potential to use more memory, as there may be multiple copies of identical code. Sometimes you get lucky and an optimization that yields faster code is also light on memory, but this is quite unusual.
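A minimal sketch of the idea, assuming a C compiler with optimization enabled (the function names and types below are illustrative, not from the article):

#include <stdint.h>

/* square() is small enough that the compiler may substitute its body
 * at each call site instead of emitting a call/return sequence. */
static inline uint32_t square(uint32_t x)
{
    return x * x;
}

uint32_t sum_of_squares(uint32_t a, uint32_t b)
{
    /* With inlining, each call below may be replaced by the multiply
     * itself: faster, and no call-frame overhead, but the code is
     * duplicated at both call sites. */
    return square(a) + square(b);
}

Note that the inline keyword is only a hint; whether the substitution actually happens depends on the compiler and its optimization settings.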