How to Reduce Code Size (and Memory Cost) Without Sacrificing Performance
Embedded.com
Nov 29, 2005 (17:55)
Today's intelligent compilers offer many options for squeezing more performance out of application code. Many of these optimizations, however, tend to increase overall code size.
As a result, once developers have optimized application code to meet the required performance specifications, they still face the challenge of bringing code size back under control.
Through an iterative process of building the application with different compiler optimization options and profiling the results, developers can home in on infrequently executed, non-critical sections of code. Trading performance for smaller code in those sections, where speed matters least, has minimal impact on overall system performance. Often, varying compiler options to reduce code size lets developers shrink the amount of on-chip and external memory an application requires without adversely affecting performance, thereby reducing the overall bill of materials (BOM).
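As an illustration of this selective trade-off, the sketch below assumes GCC, whose hot/cold and optimize function attributes allow speed-oriented and size-oriented code generation to coexist in one module; other embedded toolchains expose similar controls through pragmas or per-file compiler flags, and the function names here are purely illustrative.

```c
/* Minimal sketch: mixing speed- and size-oriented optimization.
 * Assumes GCC; the attributes used are GCC extensions. */

#include <stddef.h>
#include <stdint.h>

/* Performance-critical inner loop: keep aggressive speed optimization. */
__attribute__((hot, optimize("O3")))
uint32_t checksum(const uint8_t *buf, size_t len)
{
    uint32_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += buf[i];       /* candidate for unrolling/vectorization */
    return sum;
}

/* Rarely executed error path identified by profiling: optimize for size. */
__attribute__((cold, optimize("Os")))
void report_error(int code)
{
    (void)code;              /* code size matters more than cycles here */
}
```

At a coarser granularity, the same trade-off is often made in the build system by compiling non-critical source files with a size-oriented option such as -Os while leaving performance-critical files at a higher optimization level.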