Overcoming the embedded CPU performance wall
Julio Diez Ruiz
EETimes (January 21, 2013)
The physical limitations of current semiconductor technology have made it increasingly difficult to achieve frequency improvements in embedded processors, and so designers are turning to parallelism in multicore architectures to achieve the high performance required for current designs. This article explains these silicon limitations and how they affect CPU performance, and indicates how engineers are overcoming this situation with multicore design.
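As background (this equation does not appear in the excerpt, but it is the standard reasoning behind the frequency wall), the dynamic power dissipated by CMOS logic grows linearly with clock frequency and quadratically with supply voltage:

$$P_{\mathrm{dyn}} \approx \alpha \, C \, V_{dd}^{2} \, f$$

where $\alpha$ is the switching activity, $C$ the switched capacitance, $V_{dd}$ the supply voltage, and $f$ the clock frequency. Once $V_{dd}$ can no longer be scaled down with each process generation, raising $f$ drives power and heat density beyond what a chip and its package can dissipate, which is why designers add cores running at moderate frequencies instead of chasing higher clock rates.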
Current status of multicore SoC design and use
Over the last few years there has been a steady increase in microprocessor architectures featuring multi-threading or multicore CPUs. They are now the rule for desktop computers, and are becoming common even for CPUs in the high-end embedded market. This increase is driven by processor designers' desire to achieve higher performance, but silicon technology has reached a limit on how much performance can be gained from higher clock frequencies alone. Meeting the need for ever-increasing processing power therefore depends on architectural solutions such as replicating processor cores inside microprocessor-based systems-on-chip (SoCs).
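To make the multicore approach concrete, here is a minimal sketch in C (not taken from the article; the thread count, array size, and pthread-based decomposition are illustrative assumptions) that splits a data-parallel summation across worker threads, which the operating system can schedule onto separate cores of a multicore SoC:

/* Minimal sketch: dividing a data-parallel workload across worker
 * threads so a multicore CPU can execute the slices on separate cores.
 * NUM_THREADS and N are illustrative values, not from the article. */
#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4
#define N 1000000

static double data[N];
static double partial_sum[NUM_THREADS];

/* Each worker sums one contiguous slice of the array. */
static void *worker(void *arg)
{
    long id = (long)arg;
    long begin = id * (N / NUM_THREADS);
    long end = (id == NUM_THREADS - 1) ? N : begin + (N / NUM_THREADS);
    double sum = 0.0;

    for (long i = begin; i < end; i++)
        sum += data[i];
    partial_sum[id] = sum;
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_THREADS];
    double total = 0.0;

    for (long i = 0; i < N; i++)
        data[i] = 1.0;                      /* trivial test data */

    /* Fork: one worker per slice; the OS places them on available cores. */
    for (long t = 0; t < NUM_THREADS; t++)
        pthread_create(&threads[t], NULL, worker, (void *)t);

    /* Join: combine the per-thread partial results. */
    for (long t = 0; t < NUM_THREADS; t++) {
        pthread_join(threads[t], NULL);
        total += partial_sum[t];
    }

    printf("total = %f\n", total);
    return 0;
}

Compiled with something like gcc -O2 -pthread, this pattern of decomposing work and merging partial results is the kind of coarse-grained parallelism that lets a multicore embedded design keep scaling throughput without raising the clock frequency.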
The full article is available on EETimes.