Scaling the 100 GbE Memory Wall
Michael Sporer, Director of IC Marketing, MoSys
4/14/2014 10:15 AM EDT
System-level tradeoffs among performance, pin count, and area are interrelated, and all of them are ultimately driven by power consumption. At 100 and 400 GbE, network chip vendors must consider end-to-end solutions for equipment OEMs. To stay competitive, those OEMs plan to introduce multi-terabit systems that aggregate multiple 100 Gbit/s ports on each line card.
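To see why memory becomes the wall at these rates, some standard line-rate arithmetic helps (the port count and per-packet access figures below are illustrative assumptions, not numbers from the article): a 100 GbE port carrying minimum-size 64-byte frames must process roughly 148.8 million packets per second, and every packet typically touches memory several times for lookups, policing, and buffering.

```python
# Back-of-the-envelope line-rate arithmetic (illustrative assumptions, not
# figures from the article). A minimum-size Ethernet frame occupies
# 64 B + 8 B preamble + 12 B inter-frame gap = 84 B (672 bits) on the wire.

LINE_RATE_BPS = 100e9            # one 100 GbE port
WIRE_BITS_PER_PKT = 84 * 8       # min frame plus preamble/IFG overhead

pps = LINE_RATE_BPS / WIRE_BITS_PER_PKT
print(f"Per 100 GbE port: {pps / 1e6:.1f} Mpps")        # ~148.8 Mpps

PORTS = 10                       # assumed multi-terabit card: 10 x 100G
ACCESSES_PER_PKT = 4             # assumed lookups/buffer touches per packet

access_rate = PORTS * pps * ACCESSES_PER_PKT
print(f"Aggregate memory accesses: {access_rate / 1e9:.2f} G/s")
print(f"Time budget per access: {1e9 / access_rate:.3f} ns")  # sub-nanosecond
```

Under these assumed figures the memory subsystem gets well under a nanosecond per access, which is why raw device bandwidth, pin count, and power move together as a single tradeoff.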
Two current technology trends, 100 Gbit/s line speeds in network appliances and the transition to IPv6, compound design complexity. At both the network SoC and OEM appliance levels, solutions have to deliver performance, network management, and quality of service. Crucial parameters include absolute delay, delay jitter, minimum delivered bandwidth, and packet loss.[1] Network engineers monitor and manage networks based on these parameters, which also serve as the basis of contractual service-level agreements.
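Each of those parameters maps to simple per-packet state, which is what makes them measurable and enforceable in an SLA. As a minimal sketch (the record format is an assumption, and the jitter estimator is borrowed from RFC 3550 rather than from this article), all four can be derived from per-packet send/receive timestamps:

```python
# Minimal sketch (assumed record format) of the four SLA parameters named
# above, computed from (seq, send_time, recv_time) tuples for packets that
# actually arrived. Jitter uses the smoothed estimator from RFC 3550.

def sla_metrics(records, expected_count, bytes_received, interval_s):
    delays = [recv_t - send_t for _seq, send_t, recv_t in records]
    mean_delay = sum(delays) / len(delays)

    # RFC 3550 interarrival jitter: J += (|D| - J) / 16, where D is the
    # change in one-way transit time between consecutive packets.
    jitter = 0.0
    for prev, cur in zip(delays, delays[1:]):
        jitter += (abs(cur - prev) - jitter) / 16

    loss_rate = 1.0 - len(records) / expected_count
    delivered_bps = bytes_received * 8 / interval_s
    return mean_delay, jitter, loss_rate, delivered_bps

# Example: 3 of 4 packets arrived, 4,500 bytes over a 10 ms window.
print(sla_metrics([(0, 0.000, 0.0021), (1, 0.001, 0.0030),
                   (2, 0.002, 0.0044)],
                  expected_count=4, bytes_received=4500, interval_s=0.01))
```

In real equipment these counters are maintained in hardware per queue or per flow at line rate, which again puts the per-packet memory budget at the center of the design.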