Scaling the 100 GbE Memory Wall
Michael Sporer, Director of IC Marketing, MoSys
4/14/2014 10:15 AM EDT
All interrelated system-level tradeoffs, including performance, pin count, and area, are ultimately driven by power consumption. At 100 and 400 GbE, network chip vendors must deliver end-to-end solutions for equipment OEMs. To remain competitive, OEMs plan to introduce multi-terabit systems that aggregate multiple 100 Gbit/s ports on each line card.
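A back-of-the-envelope calculation shows why aggregated 100 Gbit/s ports run into a memory wall. The sketch below is not from the original article; it assumes standard Ethernet framing overhead (64-byte minimum frame, 8-byte preamble/SFD, 12-byte interframe gap) to estimate the per-packet time budget.

```python
# Per-packet time budget at 100 GbE, worst case (minimum-size frames).
# Wire occupancy per frame: 64-byte frame + 8-byte preamble/SFD
# + 12-byte interframe gap = 84 bytes = 672 bits.

LINE_RATE_BPS = 100e9          # one 100 GbE port
WIRE_BYTES = 64 + 8 + 12       # bytes on the wire per minimum-size frame

packets_per_sec = LINE_RATE_BPS / (WIRE_BYTES * 8)   # ~148.8 Mpps
ns_per_packet = 1e9 / packets_per_sec                # ~6.7 ns

print(f"per port: {packets_per_sec / 1e6:.1f} Mpps, "
      f"{ns_per_packet:.2f} ns per packet")

# A line card aggregating several such ports divides that budget
# across every lookup, counter update, and queue operation per packet.
for ports in (1, 4, 10):
    print(f"{ports:2d} ports: {ns_per_packet / ports:.2f} ns per packet")
```

At ten ports (1 Tbit/s per line card), the budget falls below 1 ns per packet, which is well under the random-access cycle time of commodity DRAM; this is the gap that specialized serial memories aim to close.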
Two current technology trends, 100 Gbit/s line speeds in network appliances and the transition to IPv6, compound design complexity. At both the network SoC and OEM appliance levels, solutions must deliver performance, network management, and quality of service. Crucial parameters include absolute delay, delay jitter, minimum delivered bandwidth, and packet loss.[1] Network engineers monitor and manage networks based on these parameters, which also serve as the basis of contractual service-level agreements.
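Of those parameters, delay jitter is the least obvious to compute. As an illustration only (not from the article), the sketch below implements the interarrival-jitter estimator from RFC 3550, a common basis for such monitoring; the timestamp samples are hypothetical.

```python
# Interarrival jitter estimator per RFC 3550, section 6.4.1:
# J(i) = J(i-1) + (|D(i-1,i)| - J(i-1)) / 16, where D is the change
# in one-way transit time between consecutive packets.

def update_jitter(jitter: float, transit_prev: float, transit_curr: float) -> float:
    d = abs(transit_curr - transit_prev)
    return jitter + (d - jitter) / 16.0

# Hypothetical (send_time, recv_time) pairs in milliseconds.
samples = [(0.0, 5.0), (10.0, 15.4), (20.0, 24.9), (30.0, 35.8)]

jitter = 0.0
prev_transit = None
for send, recv in samples:
    transit = recv - send
    if prev_transit is not None:
        jitter = update_jitter(jitter, prev_transit, transit)
    prev_transit = transit

print(f"estimated jitter: {jitter:.3f} ms")
```

The 1/16 gain smooths the estimate so that a single late packet does not dominate the reported jitter, which is why the same formula is usable for both live monitoring and SLA reporting.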