Scale network processors to 40 Gbps and beyond
Brian Alleyne and Man Trinh, Bay Microsystems, Inc.
Jun 28, 2006 (10:46 AM), CommsDesign
The networking industry has witnessed countless advances since the 1960s. Yet, despite myriad changes in applications, protocols and technologies over nearly half a century, one thing has not changed--the ever-increasing need for speed.
The "benchmark bandwidth" required of networking equipment increases by approximately an order of magnitude every decade or so. In the 1960s, 10 kbps was sufficient to connect terminals to mainframes. With the debut of distributed client/server computing in the 1980s, typical data rates increased to the 10-Mbps range with Ethernet and Token Ring LANs. Today's local- and wide-area networks now demand multiple Gigabits-per-second of throughput. And with the advent of IPTV and other bandwidth-hungry applications, tomorrow's networks will require substantially more capacity.
Over the years, the technologies employed to keep pace with bandwidth and its associated performance requirements have also evolved. Ordinary off-the-shelf processors worked well enough for a while. Then came custom-designed, application-specific integrated circuits (ASICs) built to process critical protocols at very high data rates. But as protocols continued to proliferate, these specialized processors and architectures made development projects considerably more complex.
Throughout this period, the industry has pursued a worthy goal--using general-purpose programmable network processors to lower development costs and accelerate time-to-market for new products and features. This article explores why fulfilling the promise of the network processor has remained so difficult, and outlines how a pipelined architecture can achieve this elusive goal.