Will your multicore SoC hit the memory wall? Will the memory wall hit your SoC? Does it matter?
Multicore SoC and processor designs were our solution to the death of Dennard scaling, when IC process geometries dropped below 90 nm, processor speeds hit 3 GHz, and processor power consumption went off the charts. Since 2004, we’ve transformed Moore’s Law into a processor-core replicator, spending transistors on more cores rather than on bigger, smarter, faster ones. But there’s a storm brewing once more, heralded by the dismal utilization of supercomputers that run hundreds to hundreds of thousands of processors in parallel. Currently, per-core processor utilization in supercomputers is less than 10% and falling, largely because of memory and I/O limitations. If we don’t want the same thing to happen to our multicore SoC designs, we need to find a new path that allows processor utilization to scale along with processor core count.
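To see why per-core utilization collapses as core counts rise, consider a back-of-envelope model: if aggregate memory bandwidth stays roughly fixed while the core count multiplies, each core's achievable fraction of peak work shrinks in proportion. The sketch below is purely illustrative and not from the original article; the bandwidth figures (100 GB/s aggregate, 4 GB/s demanded per core) are assumed numbers chosen only to show the shape of the curve.

```python
# Back-of-envelope model: per-core utilization when N cores share a fixed pool
# of memory bandwidth. All figures are illustrative assumptions, not measured data.

def per_core_utilization(n_cores,
                         total_mem_bw_gbs=100.0,      # assumed aggregate DRAM bandwidth (GB/s)
                         bw_needed_per_core_gbs=4.0): # assumed bandwidth one core needs to stay busy (GB/s)
    """Utilization is capped by the share of memory bandwidth each core actually gets."""
    bw_per_core = total_mem_bw_gbs / n_cores
    return min(1.0, bw_per_core / bw_needed_per_core_gbs)

for n in (1, 4, 16, 64, 256, 1024):
    print(f"{n:5d} cores -> {per_core_utilization(n):6.1%} per-core utilization")
```

Under these assumed numbers, utilization stays at 100% until the cores collectively saturate the memory system, then drops steadily; past that knee, adding cores no longer adds throughput, it only spreads the same bandwidth thinner. That is the same qualitative trend behind the sub-10% supercomputer utilization cited above.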