On-Chip Interconnect Costs Spawn Research
Deepak Sekar, Zsolt Tőkei, and Vincent McGahay (Rambus)
EETimes (3/26/2014 07:05 PM EDT)
With 16nm chips moving to production this year, companies are actively developing the 10nm and 7nm technology nodes. These generations are interconnect-heavy: more than 50% of their cost comes from the back-end-of-line (BEOL) wiring levels, and their designs are dominated by interconnect delay. Engineers are taking several paths to get around this trend, many of which will be discussed at the IITC/Advanced Metallization Conference in May in San Jose.
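To see why interconnect delay comes to dominate, a first-order RC estimate is enough. The sketch below (illustrative Python with assumed, round-number values for copper resistivity, dielectric permittivity, and wire geometry, none taken from the article) computes resistance and capacitance per millimetre of wire for two representative cross-sections and shows how the RC product climbs as dimensions shrink.

```python
# First-order estimate of per-unit-length wire RC as dimensions shrink.
# All geometry and material values below are illustrative assumptions,
# not data from the article.

RHO_CU = 1.9e-8           # ohm*m, bulk copper resistivity (rises further at small dimensions)
EPS_OX = 3.0 * 8.85e-12   # F/m, assumed low-k dielectric permittivity

def wire_rc_per_mm(width_nm, height_nm, spacing_nm):
    """Return (R, C, R*C) for 1 mm of wire using simple parallel-plate approximations."""
    w, h, s = (x * 1e-9 for x in (width_nm, height_nm, spacing_nm))
    length = 1e-3                                  # 1 mm of wire
    r = RHO_CU * length / (w * h)                  # ohms
    c = 2 * EPS_OX * (h * length) / s              # sidewall capacitance to both neighbours
    return r, c, r * c

for node, dims in [("32nm-class wire", (50, 100, 50)),
                   ("10nm-class wire", (20, 40, 20))]:
    r, c, rc = wire_rc_per_mm(*dims)
    print(f"{node}: R={r/1e3:.1f} kohm/mm, C={c*1e15:.0f} fF/mm, RC={rc*1e9:.2f} ns/mm")
```

With these assumed numbers the capacitance per millimetre barely moves, but the resistance rises roughly sixfold, so the RC delay of a fixed-length route grows accordingly as wires narrow.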
First, interconnect performance and reliability depend heavily on the diffusion barriers, liners, and cap layers used with copper. These can be improved in multiple ways.
For example, engineers can make these layers thinner and improve their quality by using CVD or ALD instead of PVD, and by using alternative materials. At the May conference, researchers from IBM and Applied Materials will present results of their work on multi-layer SiN caps and cobalt caps and liners that provide a 1000x improvement in electromigration lifetime, as well as improved time-dependent dielectric breakdown behavior.
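As a rough illustration of why thinner barriers and liners matter (a back-of-the-envelope sketch with assumed dimensions, not figures from the IBM/Applied Materials work): the barrier and liner are far more resistive than copper, so whatever fraction of the trench they occupy is effectively lost conductor cross-section. The snippet below estimates how much the resistance of a narrow line drops when a thicker PVD-scale barrier is replaced by a thinner ALD-scale film.

```python
# Effective resistance of a copper line when a resistive barrier/liner
# consumes part of the trench cross-section. All thicknesses are assumed,
# illustrative values.

RHO_CU = 1.9e-8  # ohm*m, bulk copper resistivity

def line_resistance(width_nm, height_nm, barrier_nm, length_um=10.0):
    """Treat the barrier as non-conducting: only the inner copper core carries current."""
    w_cu = (width_nm - 2 * barrier_nm) * 1e-9   # barrier on both sidewalls
    h_cu = (height_nm - barrier_nm) * 1e-9      # barrier on the trench bottom
    return RHO_CU * (length_um * 1e-6) / (w_cu * h_cu)

pvd = line_resistance(20, 40, barrier_nm=3.0)   # thicker, PVD-style barrier
ald = line_resistance(20, 40, barrier_nm=1.0)   # thinner, ALD/CVD-style barrier
print(f"PVD-like barrier: {pvd:.0f} ohm, ALD-like barrier: {ald:.0f} ohm "
      f"({(1 - ald / pvd) * 100:.0f}% lower)")
```

Under these assumptions, trimming the barrier from 3nm to 1nm in a 20nm-wide trench cuts line resistance by roughly a quarter, which is why thinner, higher-quality ALD and CVD films are attractive at 10nm and 7nm.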