Deliver at 100G: The impact of smart memory
Michael Miller, MoSys
 EDN (July 24, 2014) 
Middleware and full-function appliance box design teams face the daunting challenge of meeting the performance requirements of next-generation 100Gbps products. General-purpose multi-core CPU arrays provide the flexibility needed to support emerging trends such as SDN and NFV. However, the packet inspection and buffering functions point to the need for front-end traffic steering that keeps the cores load-balanced. Meeting these requirements in the same form factor as earlier-generation 10Gbps and 40Gbps products can seem next to impossible. With new developments in serial memory solutions, middleware systems can now buffer, process packets, and steer traffic to general-purpose CPU arrays 2-10 times faster than previous 10G or 40G solutions.
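To illustrate the traffic-steering step described above, the minimal sketch below hashes a flow's 5-tuple to pick a destination core, so packets of the same flow stay on one core while different flows spread across the array. The struct layout, FNV-1a-style hash, and core count are illustrative assumptions, not details of any particular vendor's implementation.

```c
/* Illustrative sketch: hash-based steering of flows to worker cores.
 * Field names, hash choice, and NUM_CORES are assumptions for
 * illustration, not a specific product's implementation.           */
#include <stdint.h>
#include <stdio.h>

#define NUM_CORES 16

struct five_tuple {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  protocol;
};

/* FNV-1a-style hash over the flow's header fields, then modulo the
 * number of cores; the same flow always lands on the same core while
 * different flows spread across the array.                           */
static unsigned steer_to_core(const struct five_tuple *ft)
{
    const uint32_t fields[] = {
        ft->src_ip,
        ft->dst_ip,
        ((uint32_t)ft->src_port << 16) | ft->dst_port,
        ft->protocol
    };
    uint32_t h = 2166136261u;                /* FNV offset basis */
    for (size_t i = 0; i < sizeof fields / sizeof fields[0]; i++) {
        h ^= fields[i];
        h *= 16777619u;                      /* FNV prime */
    }
    return h % NUM_CORES;
}

int main(void)
{
    /* Example flow: 10.0.0.1:49152 -> 10.0.0.2:443 over TCP (protocol 6) */
    struct five_tuple ft = { 0x0A000001u, 0x0A000002u, 49152, 443, 6 };
    printf("flow steered to core %u\n", steer_to_core(&ft));
    return 0;
}
```

Keeping a flow pinned to one core avoids packet reordering within the flow; in a real appliance the pre-filtering and lookup stages described in the article would run before this steering decision.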
 
 The Form Factor Challenge: Fitting 100G in a 40G Box
 To address the form factor issue, design teams at both the device and appliance level need to understand the trends shaping 100G networks. Although 10G and 40G architectures are similar to 100G, 100G (and higher) designs pose trade-off challenges compounded by multiple factors. These include higher bandwidths and an order-of-magnitude increase in the rate of lookups, driven by requirements for more statistics to monitor performance, increased security measures, and expanded functionality.
 
 Since the rate of lookups is growing faster than the CPUs can process them, this growth fuels the need for pre-filtering. Like every other electronic design, 100G (and higher) architectures face the same trade-offs of area, power, performance, and pin count. Traditional memory solutions suffer ongoing shortfalls in access rate relative to processing speed. Combine those access rates with a maximum per-pin capacity of about 2 Gbits/s, and the conclusion is that there simply aren't enough pins available. Even if a design could provide sufficient pins, the amount of memory required poses a significant challenge because of the area it consumes.
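The back-of-the-envelope calculation below makes the pin-count argument concrete. It assumes the 100 Gbps line rate must be both written to and read from the packet buffer, a conventional parallel interface running at roughly 2 Gbits/s per pin, and an interface efficiency factor; the efficiency figure is an assumption for illustration, not a number from the article.

```c
/* Back-of-the-envelope pin-count estimate for a 100G packet buffer.
 * Assumptions (illustrative only):
 *   - line-rate traffic must be written and read, so buffer bandwidth
 *     is twice the line rate
 *   - each data pin of a conventional parallel memory carries ~2 Gbps
 *   - an efficiency factor accounts for bus turnaround and refresh     */
#include <stdio.h>

int main(void)
{
    const double line_rate_gbps = 100.0;                 /* 100G interface   */
    const double buffer_bw_gbps = 2.0 * line_rate_gbps;  /* write + read     */
    const double per_pin_gbps   = 2.0;                   /* per-pin rate     */
    const double bus_efficiency = 0.7;                   /* assumed fraction */

    double pins = buffer_bw_gbps / (per_pin_gbps * bus_efficiency);
    printf("~%.0f data pins needed just for packet buffering\n", pins);
    return 0;
}
```

Under these assumptions the buffer alone consumes on the order of 150 data pins, before any pins for lookup tables or statistics memories, which is the motivation the article gives for moving to serial memory interfaces.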