Demystifying multithreading and multi-core
Kevin D. Kissell, MIPS Technologies
(09/26/2007 4:46 PM EDT), EE Times
Is multithreading better than multi-core? Is multi-core better than multithreading? The fact is that the best vehicle for a given application might have one, the other or both. Or neither. They are independent (but complementary) design decisions. As multithreaded processors and multi-core chips become the norm, architects and designers of digital systems need to understand their respective attributes, advantages and disadvantages.
Both multithreading and multi-core approaches exploit the concurrency in a computational workload. The cost, in silicon, energy and complexity, of making a CPU run a single instruction stream ever faster goes up nonlinearly and eventually hits a wall imposed by the physical limitations of circuit technology. That wall keeps moving out a little farther every year, but cost- and power-sensitive designs are constrained to follow the bleeding edge from a safe distance. Fortunately, virtually all computer applications have some degree of concurrency: At least some of the time, two or more independent tasks need to be performed simultaneously. Taking advantage of concurrency to improve computing performance and efficiency isn't always trivial, but it's certainly easier than violating the laws of physics.
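The task-level concurrency described above is easy to picture in code. Below is a minimal, hypothetical sketch (not from the article): two unrelated tasks, an invented packet filter and an invented display refresh, are expressed as POSIX threads, so a multithreaded or multi-core system is free to run them at the same time.

```c
/* Minimal sketch (hypothetical, not from the article): two independent
 * tasks expressed as POSIX threads. Because they share no data, the
 * scheduler may run them on separate cores, or interleave them on one
 * multithreaded core. */
#include <pthread.h>
#include <stdio.h>

static void *filter_packets(void *arg)   /* hypothetical task A */
{
    (void)arg;
    puts("filtering packets...");
    return NULL;
}

static void *refresh_display(void *arg)  /* hypothetical task B */
{
    (void)arg;
    puts("refreshing display...");
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, filter_packets, NULL);
    pthread_create(&b, NULL, refresh_display, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}
```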
Multi-processor, or multi-core, systems exploit concurrency to spread work around a system: as many software tasks can run simultaneously as there are processors in the system. This flexibility can be used to improve absolute performance, cost or power/performance. Clearly, once one has built the fastest single processor possible in a given technology, the only way to get more computing power is to use more than one of these processors. More subtly, if a load that would saturate a 1GHz processor could be spread evenly across four processors, those processors could each run at roughly 250MHz. If each 250MHz processor is less than a quarter the size of the 1GHz processor, or consumes less than a quarter the power (either of which may be the case, because the cost of higher operating frequencies rises nonlinearly), the multi-core system might be the more economical choice.
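As a rough illustration of that load-spreading argument, the following sketch (again hypothetical; the workload, names and thread count are invented for illustration) partitions a simple sum-of-squares loop across four POSIX threads. Each worker handles an independent quarter of the data, so four cores need only sustain about a quarter of the single-core throughput each.

```c
/* Hypothetical sketch of spreading a load across four workers.
 * Assumes the workload divides cleanly into independent chunks. */
#include <pthread.h>
#include <stdio.h>

#define NWORKERS 4
#define N 1000000

static double data[N];

typedef struct { int lo, hi; double sum; } chunk_t;

static void *work(void *arg)
{
    chunk_t *c = arg;
    double s = 0.0;
    for (int i = c->lo; i < c->hi; i++)
        s += data[i] * data[i];   /* stand-in for the real per-element work */
    c->sum = s;
    return NULL;
}

int main(void)
{
    pthread_t tid[NWORKERS];
    chunk_t   chunk[NWORKERS];
    double    total = 0.0;

    for (int i = 0; i < N; i++)
        data[i] = (double)i;

    /* Partition the index range into NWORKERS independent chunks. */
    for (int w = 0; w < NWORKERS; w++) {
        chunk[w].lo = w * (N / NWORKERS);
        chunk[w].hi = (w + 1) * (N / NWORKERS);
        pthread_create(&tid[w], NULL, work, &chunk[w]);
    }
    for (int w = 0; w < NWORKERS; w++) {
        pthread_join(tid[w], NULL);
        total += chunk[w].sum;
    }
    printf("sum of squares = %g\n", total);
    return 0;
}
```

Built with a typical toolchain as `cc -O2 workers.c -lpthread`. On a real design the chunking and thread count would follow the core count and the memory system rather than a fixed constant.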