What's Next for Multi-Die Systems in 2024?
By Shekhar Kapoor, Synopsys
It’s hard to imagine the level of systemic scale and complexity required to create a world that is truly smart. Applications such as ChatGPT, which many of us can no longer live without, require massive amounts of data to function: a training dataset of 300 billion words, 60 million daily visits, and more than 10 million queries per day as of June 2023, and that is just the beginning. The more sophisticated technologies such as AI and high-performance computing (HPC) become, the more bandwidth and compute power they demand.
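To make that scale concrete, here is a minimal back-of-envelope sketch (Python) that converts the daily figures quoted above into average per-second rates. The input numbers come straight from the paragraph; the arithmetic is the only thing added.

```python
# Back-of-envelope conversion of the usage figures quoted above
# (60 million daily visits, 10+ million queries per day) into
# average per-second rates, just to make the scale concrete.

SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

daily_visits = 60_000_000          # visits per day (from the article)
daily_queries = 10_000_000         # queries per day (from the article)
training_words = 300_000_000_000   # 300 billion words in the training set

visits_per_second = daily_visits / SECONDS_PER_DAY
queries_per_second = daily_queries / SECONDS_PER_DAY

print(f"Average visits per second:  {visits_per_second:,.0f}")   # ~694
print(f"Average queries per second: {queries_per_second:,.0f}")  # ~116
print(f"Training set size:          {training_words:,} words")
```

Even as daily averages, and before accounting for peak load or the tokens processed per query, those rates hint at why the bandwidth and compute demands keep climbing.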
Multi-die system architectures offer a path for innovation to keep accelerating as Moore’s law slows, across areas from generative AI to autonomous vehicles and hyperscale data centers. We are already seeing movement in this direction and will continue to see progress in 2024, but uptake is nuanced: designs currently occupy a middle ground spanning 2D right up to 3D (even extending to 3.5D in some cases), chosen according to performance, power, and area (PPA) requirements, or, more specifically, performance, power, form factor, and cost.
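Purely as an illustration of how those four criteria can steer a packaging choice, the sketch below ranks a few packaging styles with a toy weighted score. The option names, scores, weights, and profiles are arbitrary placeholders invented for this example, not data from the article or from Synopsys.

```python
# Toy weighted-score comparison of packaging approaches against the four
# criteria named above: performance, power, form factor, and cost.
# All scores (1-5, higher is better) and weights are arbitrary placeholders.

CRITERIA = ["performance", "power", "form_factor", "cost"]

# Hypothetical per-option scores (placeholder values for illustration only).
options = {
    "2D (monolithic SoC)": {"performance": 3, "power": 4, "form_factor": 3, "cost": 5},
    "2.5D (interposer)":   {"performance": 4, "power": 3, "form_factor": 3, "cost": 3},
    "3D (stacked die)":    {"performance": 5, "power": 3, "form_factor": 5, "cost": 2},
}

def rank(weights):
    """Sort options by weighted score for a given application profile."""
    scored = {
        name: sum(weights[c] * scores[c] for c in CRITERIA)
        for name, scores in options.items()
    }
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

# Two hypothetical application profiles with different priorities.
perf_first     = {"performance": 0.5, "power": 0.2, "form_factor": 0.1, "cost": 0.2}
cost_sensitive = {"performance": 0.1, "power": 0.3, "form_factor": 0.1, "cost": 0.5}

print("Performance-first priorities:", rank(perf_first))      # 3D ranks highest
print("Cost-sensitive priorities:   ", rank(cost_sensitive))  # 2D ranks highest
```

The point is not the numbers but the shape of the decision: different weightings of the same four criteria can land different products anywhere along the 2D-to-3D spectrum.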
The smart future relies on multi-die system design, but that design approach will need support to become a widespread reality in the coming year and beyond. Here are four of the top multi-die system design predictions for 2024.
To read the full article, click here
Related Semiconductor IP
- JESD204E Controller IP
- eUSB2V2.0 Controller + PHY IP
- I/O Library with LVDS in SkyWater 90nm
- 50G PON LDPC Encoder/Decoder
- UALink Controller
Related Articles
- RISC-V in 2025: Progress, Challenges, and What’s Next for Automotive & Open Hardware
- The SoC design: What’s next for NoCs?
- ChipBench: A Next-Step Benchmark for Evaluating LLM Performance in AI-Aided Chip Design
- Systems and Integration: What's next in Compact PCI?
Latest Articles
- Crypto-RV: High-Efficiency FPGA-Based RISC-V Cryptographic Co-Processor for IoT Security
- In-Pipeline Integration of Digital In-Memory-Computing into RISC-V Vector Architecture to Accelerate Deep Learning
- QMC: Efficient SLM Edge Inference via Outlier-Aware Quantization and Emergent Memories Co-Design
- ChipBench: A Next-Step Benchmark for Evaluating LLM Performance in AI-Aided Chip Design
- COVERT: Trojan Detection in COTS Hardware via Statistical Activation of Microarchitectural Events