Using next-gen PCI Express switches to eliminate network I/O bottlenecks
(02/06/08, 03:43:00 PM EST) -- Embedded.com
Controllers in today's network-connected embedded systems are often overwhelmed by the data streaming to and from their various I/O sources. It can be difficult for the system's root complex to absorb high-speed, bursty traffic such as 10 Gigabit Ethernet when it competes with very fast streaming data from sources such as InfiniBand and Fibre Channel (FC) storage elements.
For example, when a few bytes of Ethernet data get stuck behind large packets of FC data in the root complex, the latency introduced by this congestion severely degrades system response time and limits usable bandwidth (see Table 1 below).
Table 1. Ethernet latency/bandwidth tradeoffs
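To put the head-of-line blocking problem in rough numbers, the sketch below estimates how long a small Ethernet packet waits behind a single large Fibre Channel transfer on a PCIe link. The payload size, lane count, TLP overhead, and encoding efficiency are illustrative assumptions, not values taken from the article or from Table 1.

```python
# Back-of-envelope estimate of head-of-line blocking latency: a small
# Ethernet TLP waiting behind one large Fibre Channel TLP on a PCIe link.
# All figures are illustrative assumptions, not measurements.

def tlp_serialization_time_us(payload_bytes: int,
                              lanes: int,
                              gt_per_s: float,
                              encoding_efficiency: float = 0.8,  # 8b/10b
                              tlp_overhead_bytes: int = 24) -> float:
    """Time to move one TLP across the link, in microseconds."""
    bytes_per_s = lanes * gt_per_s * 1e9 * encoding_efficiency / 8
    return (payload_bytes + tlp_overhead_bytes) / bytes_per_s * 1e6

# A small Ethernet packet stuck behind a 2 KB FC data TLP:
blocking_gen1 = tlp_serialization_time_us(2048, lanes=8, gt_per_s=2.5)
blocking_gen2 = tlp_serialization_time_us(2048, lanes=8, gt_per_s=5.0)
print(f"x8 Gen1: small packet waits ~{blocking_gen1:.2f} us behind the FC TLP")
print(f"x8 Gen2: the same wait drops to ~{blocking_gen2:.2f} us")
```

Under these assumptions, doubling the signaling rate from Gen1 to Gen2 halves the serialization time of the blocking packet, which is one reason Gen2 signaling helps with the bottleneck described above.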
The next generation of PCI Express (PCIe) switches has added many new features to mitigate the effects of processing competing data protocols, thereby improving overall system performance.
Features such as Read Pacing, more flexible port configuration, dynamic buffer-memory allocation, and PCIe Gen2 signaling reduce I/O bottlenecks and deliver dramatic performance improvements in server and storage controllers.
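The article does not spell out the Read Pacing mechanism here; the sketch below only illustrates the queuing intuition behind it, assuming a switch that caps how many large read requests a fast endpoint may have outstanding so another endpoint's small completions are not queued behind all of them. The queue depths, packet sizes, and link rate are invented for illustration.

```python
# Toy model of the intuition behind Read Pacing: cap how many large read
# completions can be queued ahead of a latency-sensitive small completion.
# All figures below are illustrative assumptions, not vendor data.

def small_completion_wait_us(large_ahead: int,
                             large_bytes: int = 2048,
                             link_bytes_per_us: float = 2000.0) -> float:
    """Wait time for a small completion queued behind `large_ahead` large ones."""
    return large_ahead * large_bytes / link_bytes_per_us

# Unpaced, a fast FC endpoint might have, say, 32 large reads in flight, and
# the Ethernet completion queues behind all of them. With the pacing limit
# set to 4, the small completion is interleaved after at most 4 large ones.
for label, ahead in (("no pacing (32 outstanding)", 32), ("pacing limit = 4", 4)):
    print(f"{label}: small completion waits ~{small_completion_wait_us(ahead):.1f} us")
```

This is the same head-of-line effect quantified above, but the wait is now bounded by the pacing limit rather than by the depth of the fast endpoint's read pipeline.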