How HyperTransport and PCI Express complement each other
By Michael Sarpa, Marketing Manager, PCI Express, and Kimkinyona Fox, Marketing Manager, HyperTransport, PLX Technology Inc., Sunnyvale, Calif., EE Times
October 23, 2002 (10:02 a.m. EST)
URL: http://www.eetimes.com/story/OEG20021023S0011
The proliferation of new interconnect technologies has some designers scratching their heads, many believing that there are no significant differences among the alternatives, and that all compete for the same applications. The fact, however, is that many of these interconnects have different system roles to play and are optimized for the needs of certain parts of a system hierarchy.
While HyperTransport and PCI Express, for example, have a number of similarities, a look at their differences and areas of momentum reveals that they perform complementary roles within the same system. HyperTransport, on the one hand, has gained success as a processor connection for AMD and MIPS-based embedded CPUs, while PCI Express is expected to be the next-generation mainstream local and backplane interconnect for a wide variety of market segments.
HyperTransport and PCI Express have a number of characteristics in common. Both are based on point-to-point, dual unidirectional LVDS connections. This approach has become common in a wide range of state-of-the-art interconnection technologies, including other complementary standards such as InfiniBand and RapidIO. Both HyperTransport and PCI Express provide a high degree of compatibility with PCI from a software point of view, and both offer a flexible, high-performance connection.
However, their differences actually provide more insight into their intended uses and how they are expected to penetrate the market. HyperTransport is, at its root, a parallel standard with separate clock and data lines. Currently, the processors that use this standard as their interconnection technology offer an 8-bit version, but they are expected to move quickly to the 16-bit version, since the path to the processor in any system is often a bottleneck to overall system performance.
On the other hand, PCI Express was designed from the start as a serial technology, with the basic connection entity being a single bidirectional lane. There are no sideband signals in PCI Express; even the clock is embedded in the basic link. Bandwidth is added by duplicating the serial lanes in x2, x4, x8, x12, x16, and x32 configurations. This gives a backplane designer the ability to scale a system in a flexible, low-skew, low-pin-count manner.
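As a rough illustration of this lane-based scaling, the sketch below picks the narrowest standard link width that meets a given raw-bandwidth target. The 2.5 Gbit/s-per-lane figure is the first-generation signaling rate cited later in this article; the function name and the example target are hypothetical, chosen only for illustration.

```python
# Minimal sketch of PCI Express lane-based scaling (illustrative only).
# Assumes the 2.5 Gbit/s-per-lane, per-direction rate quoted in this
# article; the widths follow the x1/x2/x4/x8/x12/x16/x32 configurations.

STANDARD_WIDTHS = (1, 2, 4, 8, 12, 16, 32)  # lanes per link
RAW_RATE_GBPS = 2.5                         # raw Gbit/s per lane, per direction

def smallest_link_for(target_gbps_per_direction: float) -> int:
    """Return the narrowest standard link meeting a raw per-direction target."""
    for width in STANDARD_WIDTHS:
        if width * RAW_RATE_GBPS >= target_gbps_per_direction:
            return width
    raise ValueError("target exceeds a x32 first-generation link")

# Example: a slot that must carry 10 Gbit/s in each direction fits a x4 link.
print(smallest_link_for(10.0))  # -> 4
```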
Also, the topology that each standard expects for its usage model provides a guideline to its position in the system. HyperTransport was originally defined as a host-centric, daisy-chained topology, with all traffic flowing from peripherals to the central CPU and back. In the near future, the specification will be expanded to allow a switch-based topology, which is most efficient for connection of multiple processors.
PCI Express was designed to be switch-based, with both host-centric and peer-to-peer traffic identified as targets in the original planning. The core specification provides for the basic topology, indicating that the expected systems would be switch-based, though mesh and hybrid topologies are also possible. The Advanced Switching addition to the PCI Express spec adds a range of features, including a standardized method for providing peer-to-peer transfers.
Engineers at PLX Technology have determined that PCI Express will be the next-generation local and backplane interconnection standard for a wide variety of markets, including servers, embedded, and communications. The combination of backward compatibility with a vast PCI software infrastructure, high bandwidth, low pin count, elegant scalability, quality-of-service capability, and strong industry momentum helps it succeed in this role.
Initial silicon from PLX and other companies is expected to appear in 2003, and systems based upon the PCI Express standard will be in the marketplace starting in 2004.
Backward compatibility
The software backward compatibility of PCI Express is a key industry enabler, and is one major reason that we determined that this was to be our future direction. This is especially important when comparing PCI Express against other potentially competing backplane technologies, such as a CompactPCI backplane based on Ethernet. If you are already starting from a PCI system, you benefit from the PCI Express switch-based architecture, the LVDS point-to-point technology, and the scalability, while still making use of your existing software base.
Switching to another technology for the backplane entails a significant software change. And, as resistant as customers are to hardware changes, they are often completely unreceptive to software changes. Even after making the software changes to accommodate a competing standard, they are left with a system that is, from a technical point of view, similar or inferior to PCI Express in performance, scalability, and features.
On the performance front, PCI Express outperforms the alternatives, with a starting raw bandwidth of 2.5 Gbit/second per lane per direction. This can be scaled easily, providing a maximum full duplex bandwidth of 128 Gbit/sec after factoring in the embedded clock overhead. It is expected that follow-on versions of the specification will make use of double or even higher clock rates as affordable higher-performance serdes technology inevitably appears in a few years.
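To see where the 128-Gbit/s figure comes from, the following back-of-the-envelope sketch applies the 8b/10b line coding used by first-generation PCI Express (the source of the embedded-clock overhead mentioned above) to the raw 2.5-Gbit/s lane rate. It is a worked check of the numbers quoted in this article, not product data.

```python
# Back-of-the-envelope check of the PCI Express bandwidth figures above.
# 8b/10b line coding carries 8 data bits in every 10 transmitted bits,
# which is where the "embedded clock overhead" comes from.

RAW_PER_LANE = 2.5                          # Gbit/s, per lane, per direction
EFFECTIVE_PER_LANE = RAW_PER_LANE * 8 / 10  # 2.0 Gbit/s after 8b/10b coding

per_direction_x32 = 32 * EFFECTIVE_PER_LANE  # 64 Gbit/s for a x32 link
full_duplex_x32 = 2 * per_direction_x32      # 128 Gbit/s, as cited above

print(EFFECTIVE_PER_LANE, per_direction_x32, full_duplex_x32)  # -> 2.0 64.0 128.0
```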
In terms of features, PCI Express provides a robust set of capabilities for building a wide variety of systems. The core specification offers quality of service, data integrity, and hot-plug support. The Advanced Switching (AS) specification overlays a rich set of additional capabilities that are especially important to the communications market, such as a flat global addressing scheme, multi-host failover, standardized peer-to-peer transfers, and the ability to build systems that mix core and AS endpoints in any combination. This last feature will be especially important as people migrate from their current systems to AS systems, particularly if they want to reuse much of their legacy PCI-based silicon.
HyperTransport has the two key characteristics important to a processor interconnect: high bandwidth and low latency. Out of the gate, it is capable of handling up to 1.6 Gbit/second per differential pair, achieving up to 12.8 Gbyte/sec in its maximum 32-bit configuration. That is well beyond what is possible with previous bus-based microprocessor interfaces, such as SysAD for MIPS processors and MPX for Motorola processors. And because HyperTransport is a point-to-point link, rather than a shared-bus topology, overall latency is significantly reduced.
In addition, HyperTransport is designed to scale for cost and performance. HyperTransport supports transfer rates of up to 800 MHz as well as link widths of up to 32 bits for maximizing processor I/O data transmission. A double-hosted chain topology is also supported to further scale performance by offloading data-processing tasks to a slave device. And for processor applications that are cost- and power-sensitive, the link width, pin count, and transfer rate can all be scaled down to the requirements of the system. This flexibility has the added benefits of decreased system complexity and easier board routing.
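As a rough sketch of that cost/performance scaling, the function below computes the aggregate (both-direction) bandwidth of a single HyperTransport 1.x link from its width and clock rate, assuming the double-data-rate signaling implied by the 800-MHz and 1.6-GTransfers/s figures in this article; the function name is hypothetical.

```python
# Sketch of HyperTransport 1.x link bandwidth versus width and clock rate.
# Assumes double-data-rate signaling: an 800 MHz clock yields 1.6 GTransfers/s
# per differential pair, as cited in this article.

def ht_aggregate_gbytes_per_sec(width_bits: int, clock_mhz: int) -> float:
    """Aggregate (both directions) bandwidth of one link, in Gbyte/s."""
    transfers_per_sec = clock_mhz * 1e6 * 2          # two transfers per clock (DDR)
    bits_per_sec_each_way = width_bits * transfers_per_sec
    return 2 * bits_per_sec_each_way / 8 / 1e9       # both directions, in bytes

for width in (8, 16, 32):
    print(width, ht_aggregate_gbytes_per_sec(width, 800))
# -> 8 3.2 / 16 6.4 / 32 12.8 -- the 32-bit case matches the 12.8 Gbyte/s above
```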
As for momentum, HyperTransport has the advantage of being available on processors here and now. HyperTransport has become the processor interconnection point for AMD processors, many MIPS-based processors, and some graphics and security processors, and there are plans to extend it further.
The HyperTransport Specification 1.05, due out in the second half of 2002, will define a HyperTransport switch to allow more powerful multiprocessing capability. In addition, the upcoming HyperTransport Specification 2.0 will define a new physical layer that targets a speed increase to double the current 1.6 GTransfers/sec rate.
Thus, HyperTransport is expected to continue its role as a high-performance CPU interface, and PCI Express will standardize the local and backplane interconnect landscape. It is expected that many systems will integrate both HyperTransport and PCI Express, bringing the best of both interfaces to bear in their appropriate places in the system hierarchy of interconnects.