Putting Multicore Processing in Context: Part 2
Mar 7 2006 (10:00 AM)
Is it a given that using multiple cores will speed up an application? Amdahl’s Law is not the only factor that determines how much speedup, if any, an application sees.
In general, if speedup is the sole objective of adding processors, three conditions must hold: (1) the existing processor is overloaded and cannot process the available work in a satisfactory time frame; (2) the workload contains elements that can be divided and worked on in parallel; and (3) a suitably faster single processor cannot provide the processing power needed to handle the workload in a satisfactory time.
Part 1 in this series examined the “classic” reasons why adding processors to a computing machine does not yield a proportional increase in performance. Most, if not all, of those reasons trace back in some form to Amdahl’s Law.
Basically, Amdahl’s Law states that the upper limit on the speedup gained by adding processors is set by the amount of serial code the application contains. Code may be serial because it is explicitly written that way, or it may become serialized because it shares resources, including shared data: only one processor or core can access a shared data item at a time.
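For reference, Amdahl’s Law is commonly written in terms of the fraction P of the work that can run in parallel across N processors:

    speedup(N) = 1 / ((1 - P) + P / N)

As a worked example, if 90 percent of an application is parallelizable (P = 0.9), eight cores give a speedup of roughly 1 / (0.1 + 0.9/8) ≈ 4.7, and even an unlimited number of cores can deliver no more than 10x.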
The next step in exploring whether multicore processing will benefit your application is the hardware. Most embedded multicore designs use shared memory (all cores can access some or all of the on-chip memory), and the cores have some means of communicating with one another. Even so, for most applications, adding more cores does not lead to a proportional increase in performance.
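To make the data-sharing point concrete, here is a minimal C sketch (assuming a POSIX-threads environment; the counter, thread count, and iteration count are illustrative, not taken from the article) in which several threads update one shared variable under a lock. The critical section executes serially no matter how many cores the chip has, which is exactly the kind of serialization Amdahl’s Law penalizes.

#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4
#define ITERATIONS  1000000L

/* Shared data: only one core may update it at a time. */
static long shared_counter = 0;
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (long i = 0; i < ITERATIONS; i++) {
        /* Critical section: threads queue up on this lock, so this
         * portion of the work runs serially regardless of core count. */
        pthread_mutex_lock(&counter_lock);
        shared_counter++;
        pthread_mutex_unlock(&counter_lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_THREADS];

    for (int i = 0; i < NUM_THREADS; i++)
        pthread_create(&threads[i], NULL, worker, NULL);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(threads[i], NULL);

    printf("shared_counter = %ld\n", shared_counter);
    return 0;
}

Running such a program on two, four, or eight cores shows little or no improvement in the locked portion, because the shared data forces the cores to take turns.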