PCI Express and Advanced Switching: different chores
By Mark Summers, EE Times
November 7, 2003 (3:22 p.m. EST)
URL: http://www.eetimes.com/story/OEG20031107S0044
The last few years have presented a challenging interconnect environment for OEMs. Technologies have been evolving rapidly, and there has been a bewildering array of interconnects from which to choose. More to the point, there have been far too many interconnects to accommodate in product designs.

This situation can be resolved by achieving several goals. First, the number of interconnects must be reduced so that silicon from multiple vendors can be easily mixed and matched. Each interconnect in this minimum set must be both technically robust and economically viable; that is necessary to cover the usage models, remove the need for proprietary interconnects and sustain the companies making interconnect silicon. Since the end game is clearly switch-fabric interconnects, a necessary second step is to ensure multiple sources of bridges and switches that are fully interoperable. Realistically, bridges will be heavily used while the interconnect situation sorts itself out; switches will be used for the long term. Together with a small number of other interconnects, including Ethernet and focused dataplane interconnects, PCI Express technology and Advanced Switching can achieve these goals for OEMs and other system designers.

PCI Express and Advanced Switching are envisioned as a two-part solution to a two-part problem. An analysis of interconnect requirements for both chassis- and appliance-based communications equipment shows that customer technical requirements divide broadly into two models. The first can be called "local interconnect." It runs predominantly, though not entirely, between chips on a board; is closely coupled to software running on general-purpose CPUs; and has a legacy of open-standard implementations. The second can be called "system fabric." Its features are determined primarily by midrange communication-system backplanes, notably unified data/control planes, and include congestion management, handling of arbitrary protocols, quality of service and multicasting.

These two sets of fundamental technical requirements drive two different technologies. A single technology trying to do both would end up either too complex and expensive for local-interconnect applications or lacking in features for system fabrics. Thus, PCI Express and Advanced Switching implement distinct transaction layers on top of a common foundation of PHY and link layers (sketched in the example below). This common foundation minimizes time-to-market and life-cycle costs in every part of the industry, from vendors to OEMs.

Interconnect for local I/O has historically been closely linked to the load/store paradigm of software running on general-purpose processors. It covers all forms of peripheral and I/O devices in single-host systems, and has culminated in the ubiquity of PCI today. PCI's natural migration path to PCI Express retains full compatibility with PCI and PCI-X while adding bandwidth, scalability, quality of service and manageability. That migration will accelerate quickly, yet will still take many years to complete because of the long life cycles of communications equipment.

Advanced Switching can support local I/O, but it provides more functionality than simple peripherals require. While Advanced Switching reuses the physical and link layers of PCI Express, it is not a replacement for it; instead, it provides a protocol interface (PI) dedicated to PCI Express technology.
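To make that layering concrete, here is a minimal C sketch of how two distinct transaction layers can ride on one shared link/PHY foundation. All type and field names (link_frame, pcie_tlp, as_packet and their members) are simplified illustrations, not the actual header formats defined by the PCI Express or Advanced Switching specifications.

```c
#include <stdint.h>

/* Common link-layer framing shared by both protocols (assumed layout). */
struct link_frame {
    uint16_t seq_num;   /* link-level sequence number for retries */
    uint32_t lcrc;      /* link CRC protecting the packet */
};

/* PCI Express transaction layer: load/store semantics against addresses. */
struct pcie_tlp {
    struct link_frame link;  /* the common foundation */
    uint8_t  fmt_type;       /* e.g., memory read or memory write */
    uint64_t address;        /* target address: the load/store paradigm */
    uint16_t length_dw;      /* payload length in dwords */
};

/* Advanced Switching transaction layer: path-routed datagrams. */
struct as_packet {
    struct link_frame link;  /* the same common foundation */
    uint32_t route;          /* a path through the fabric, not an address */
    uint8_t  pi;             /* protocol interface: which protocol rides inside */
    uint8_t  traffic_class;  /* hook for QoS and congestion management */
};
```

The difference in chores is visible in the types: the PCI Express header carries a target address for load/store traffic, while the Advanced Switching header carries a route through the fabric plus a protocol-interface tag, so arbitrary protocols can be switched as datagrams.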
The simplest interprocessor interconnect is a coprocessor, which almost universally uses PCI today, with a natural migration to PCI Express tomorrow. Its load/store protocol is sufficient for control and management transactions, and DMA provides satisfactory support for block transfers such as packet payloads. Adding nontransparent bridges to PCI Express extends its capability to multiple independent processors. Simple message passing can be supported using memory buffers, much as VMEbus systems have done for years (a sketch of one such buffer scheme follows this section). However, this approach does not support true datagrams, quality of service or fault isolation, and therefore does not scale well. Supporting dual host processors with failover is about the limit of PCI Express technology's ability.

As the number and type of connected processors grows, or as dataplane features are added, Advanced Switching comes into play. It can support load/store or datagram-based control traffic, as well as any form of dataplane traffic. Advanced Switching scales easily across any number of processors, and it can accommodate any traffic pattern and arbitrary topologies. For instance, a deep packet-inspection function such as intrusion detection requires tightly coupled NPUs and general-purpose CPUs. These units must exchange both control information and packets, either raw off the line or preprocessed. Advanced Switching was designed to handle just these types of applications.

Mezzanine cards are broadly used in communications systems and cover a bewildering variety of functions. Their implementation with PCI Express or Advanced Switching can be simplified by viewing mezzanines as a simple exercise in topology: taking a system block diagram and allocating components to the baseboard or to N mezzanine cards. Thus, PCI Express covers traditional I/O cards, while Advanced Switching or another dataplane interconnect covers line-card I/O, such as an ATM or Sonet framer. PCI Express technology covers traditional coprocessors like crypto accelerators, while Advanced Switching covers truly packet-oriented coprocessors such as DSP farms for voice-over-packet applications. A single-processor mezzanine can use PCI Express, while multiprocessor mezzanines would use Advanced Switching. Control-plane components such as a line-card control processor would use PCI Express, while a dataplane component such as an NPU would use Advanced Switching.

Only relatively simple backplanes are suitable for a PCI Express implementation. These range from the trivial case of one host card plus "dumb" I/O cards up to a system with dual-redundant host cards and intelligent I/O cards; the latter could have a local processor if it sat behind a nontransparent bridge. A full communications backplane really requires Advanced Switching to provide data- and control-plane features such as quality of service, congestion management, redundancy, rapid failover, manageability, and arbitrary protocols and topologies. Advanced Switching enables as smooth a migration as possible from existing backplane technologies and protocols, while maintaining product differentiation and dramatically lowering costs.

Next-generation modular communications platforms gain an advantage from open-standard interconnects. They must be true open standards and satisfy a broad range of usage models, from chip-to-chip control traffic all the way up to unified backplanes. The combination of PCI Express and Advanced Switching uniquely meets these requirements.
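As a rough illustration of that memory-buffer message passing, the C sketch below shows a producer/consumer ring that two processors could share through a nontransparent bridge's address window. Everything here (msg_ring, the slot layout, the sizes) is hypothetical; a real design would add doorbell interrupts, explicit memory barriers, cache management and error recovery. Note also what the scheme lacks: true datagrams, quality of service and fault isolation, which is exactly why it does not scale.

```c
#include <stdint.h>
#include <string.h>

#define RING_SLOTS 64
#define SLOT_BYTES 256

struct msg_slot {
    uint32_t len;                        /* valid bytes in data[] */
    uint8_t  data[SLOT_BYTES];
};

/* One ring, mapped by both processors through the bridge window.
 * head is written only by the producer, tail only by the consumer. */
struct msg_ring {
    volatile uint32_t head;
    volatile uint32_t tail;
    struct msg_slot slots[RING_SLOTS];
};

/* Post a message; returns 0 on success, -1 if full or too large. */
static int ring_send(struct msg_ring *r, const void *msg, uint32_t len)
{
    uint32_t head = r->head;
    uint32_t next = (head + 1) % RING_SLOTS;

    if (len > SLOT_BYTES || next == r->tail)
        return -1;                       /* no flow control beyond "full" */

    memcpy(r->slots[head].data, msg, len);
    r->slots[head].len = len;
    r->head = next;                      /* publish only after the copy */
    return 0;
}

/* Pull the next message; returns its length, or -1 if the ring is empty. */
static int ring_recv(struct msg_ring *r, void *buf, uint32_t maxlen)
{
    uint32_t tail = r->tail;

    if (tail == r->head)
        return -1;

    uint32_t len = r->slots[tail].len;
    if (len > maxlen)
        len = maxlen;
    memcpy(buf, r->slots[tail].data, len);
    r->tail = (tail + 1) % RING_SLOTS;
    return (int)len;
}
```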
Together with other key standards such as Ethernet and interconnects such as SPI 4.2, PCI Express and Advanced Switching will help fulfill the promise of modular communications platforms.

Mark Summers is senior engineer, Embedded Intel Architecture Division, Intel Corp. (Hillsboro, Ore.).