Common physical layer issues underlie new I/O standards
By Kevin Donnelly, Vice President, Network Communications Division, Rambus Inc., Los Altos, Calif., EE Times
October 28, 2002 (10:13 a.m. EST)
URL: http://www.eetimes.com/story/OEG20021023S0014

The challenge before the industry is to sort through the various emerging I/O standards. To a large extent, the potential performance and cost of a system are determined at the physical layer of the interface. Thus, system architects must understand the tradeoffs between various parallel and serial interconnects.

Parallel buses incur minimal latency. At the physical interface of the parallel bus, data is instantly available on each clock edge for load-store applications. The data is available to the control functions inside the processor without going through serialization conversions or decoding.

But this "instant data" comes with a system-design cost. The many data lines of the parallel bus must have traces matched in length and matched to the clock signal(s) to minimize skew. The lower the skew, the faster the possible data transfer rate. This trace matching requires extra PCB real estate and can require extra board layers.
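
To make the skew tradeoff concrete, the short sketch below (in Python, with assumed numbers rather than figures from any particular bus specification) treats the bit time as a budget that must cover skew, clock jitter and the receiver's setup/hold window; tightening the trace matching shrinks the skew term and raises the achievable per-pin rate.

    # Rough parallel-bus timing budget. All numbers are illustrative
    # assumptions, not taken from any bus specification.

    def max_transfer_rate_mbps(skew_ps, jitter_ps, setup_hold_ps):
        # The bit time (unit interval) must cover trace-to-trace skew,
        # clock jitter and the receiver's setup/hold window.
        unit_interval_ps = skew_ps + jitter_ps + setup_hold_ps
        return 1e6 / unit_interval_ps      # 1e6 ps per us -> Mbit/s per pin

    print(max_transfer_rate_mbps(250, 150, 600))   # ~1000 Mbit/s per pin
    print(max_transfer_rate_mbps(125, 150, 600))   # tighter matching: ~1143 Mbit/s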

If the required performance can be met with fewer pins, then component costs, board-layout complexity and the number of board layers can all be reduced. Serial links are able to support more than three gigabits/second across 20 inches of board and two connectors, and thus have become suitable for lowering the cost of board-to-board and chip-to-module connections.

However, a disadvantage of serial links is the die area and additional latency required for serializing/deserializing, encoding/decoding, and clock recovery of the data stream, which makes serial links unsuitable for certain low-latency applications.
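
As a rough illustration of where that latency comes from, the sketch below (assumed figures, not from any specification) counts only the time needed to shift one parallel word through a single lane after 8b/10b-style coding; encoder pipeline stages and receiver clock recovery add further delay on top of this.

    # Back-of-the-envelope serializer delay. Figures are assumptions for
    # illustration only.

    def serialization_delay_ns(word_bits, lane_rate_gbps, line_bits_per_byte=10):
        # Assume an 8b/10b-style code: 10 line bits carry 8 data bits.
        line_bits = word_bits / 8 * line_bits_per_byte
        return line_bits / lane_rate_gbps  # bits / (Gbit/s) = ns

    print(serialization_delay_ns(32, 2.5))  # 16 ns just to shift one word out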

Another cost factor is availability and manufacturability. To date, serial-link PHYs have been regarded as difficult to implement, requiring mixed-signal expertise, tuned IC processes and special care during the silicon design flow. For these new serial-link interfaces to be adopted in high-volume PC applications, they must be widely available in foundries using standard processes, chip packages and board designs.

The serial-link analog core must be supported in standard libraries, and its incorporation into ASIC and ASSP design flows must be seamless. In addition, the serial-link core must have a robust design, resulting in a high-yield, easily manufactured product that is interoperable with a wide variety of companion devices. These requirements must be met for serial link-based standards to be adopted successfully in volume.

Let's briefly review the terrain of new interconnects. Packet-switched RapidIO specifies 8- and 16-bit buses, and achieves up to 1.25 Gbit/second per pin using LVDS signaling levels. It is primarily intended for control-plane connections in communications/networking systems, and for processor-to-processor connections in DSP farms. This summer, the RapidIO Trade Association released the specification for Serial RapidIO to address longer backplane channels while maintaining software compatibility with parallel RapidIO. Each Serial RapidIO lane provides transfer rates of 1, 2 or 2.5 Gbit/second.

The HyperTransport standard is aimed at processor-to-bridge, bridge-to-bridge and processor-to-coprocessor I/O on the motherboard. Its specification provides for bus widths of 2, 4, 8, 16 and 32 bits; LVDS signaling; and double-data-rate clocks from 200 MHz to 800 MHz. This yields per-pin transfer rates from 400 Mbit/second to 1.6 Gbit/second.
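
The per-pin figures in the two preceding paragraphs combine with bus width in the obvious way; the arithmetic below is only an illustration of that scaling, not a statement of either specification, and the widths and clocks shown are example configurations.

    # Per-pin rate times bus width gives the aggregate one-direction
    # bandwidth. Widths and clocks below are example configurations only.

    def aggregate_gbps(width_bits, clock_mhz, double_data_rate=True):
        transfers_per_sec = clock_mhz * 1e6 * (2 if double_data_rate else 1)
        return width_bits * transfers_per_sec / 1e9

    print(aggregate_gbps(16, 625))   # 16-bit RapidIO, 1.25 Gbit/s per pin: 20.0
    print(aggregate_gbps(16, 800))   # 16-bit HyperTransport, 1.6 Gbit/s per pin: 25.6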

Typical system architectures have a combination of serial and parallel interconnects. Serial links can support up to three gigabits per second of data and simplify board-to-board and chip-to-module connections, but come at the additional cost of increased on-chip processing. There is still a critical role for parallel interconnects which, while more complex, do not have the latency penalty.
Source: Rambus Inc.

Within PCs, hard disk drives have generally been connected to the motherboard using the AT Attachment (ATA) standard. ATA-6 provides access to storage devices via a 100-MHz DDR clock and TTL/CMOS data levels.

Starting early next year, disk drives will use the new Serial ATA interface, which at 1.5 Gbit/second per channel supports 50 percent higher bandwidth than ATA-6, and is intended to scale to 6 Gbit/second by 2007. The parallel ATA ribbon cable and its associated 40-pin connectors will be replaced with a much slimmer cable that is easier to route within the system enclosure. Serial ATA's low (250-mV) signal levels also help reduce I²R dissipation inside the box.
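
The 50 percent figure follows from straightforward arithmetic, assuming Serial ATA's 8b/10b line coding and ATA-6's 100-Mbyte/second ceiling; the sketch below walks through it.

    # Arithmetic behind the "50 percent higher" comparison. Assumes Serial
    # ATA's 8b/10b line coding and ATA-6's 100-Mbyte/s transfer ceiling.

    sata_line_rate_gbps = 1.5
    sata_payload_gbps = sata_line_rate_gbps * 8 / 10        # 1.2 Gbit/s of data
    sata_mbytes_per_sec = sata_payload_gbps * 1000 / 8      # 150 Mbyte/s

    ata6_mbytes_per_sec = 100                               # Ultra DMA mode 5
    print(sata_mbytes_per_sec / ata6_mbytes_per_sec - 1)    # 0.5 -> 50 percent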

The latest PCI-X 2.0 specification was released this summer, and includes 32- and 64-data-bit bus widths, 266-MHz and 533-MHz double-data-rate clocking, and 500-Mbit/sec-per-pin transfer rates, using 3.3-V CMOS signal levels.

Express timing
Starting in 2004, the serial link-based PCI Express will be deployed as a replacement for today's PCI bus in chip-to-module, board-to-board and backplane connections. The PCI Express specification defines a raw data rate of 2.5 Gbit/sec per lane, yielding a 2-Gbit/sec effective data rate after coding. The PCI Express roadmap anticipates interfaces up to 32 lanes wide and faster (5.0-Gbit/sec) connections.
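
The gap between the 2.5-Gbit/sec raw rate and the 2-Gbit/sec effective rate is the 8b/10b coding overhead, and lanes simply multiply the effective rate; the short sketch below illustrates that arithmetic (it is not a quotation of the specification).

    # 8b/10b coding overhead and lane scaling for PCI Express (sketch only).

    def effective_gbps(raw_gbps_per_lane, lanes=1, data_bits=8, line_bits=10):
        return raw_gbps_per_lane * data_bits / line_bits * lanes

    print(effective_gbps(2.5))         # 2.0 Gbit/s per lane after coding
    print(effective_gbps(2.5, 32))     # 64.0 Gbit/s for a 32-lane interface
    print(effective_gbps(5.0))         # 4.0 Gbit/s per lane at the faster rate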

Servers and network equipment use interconnects defined by the Ethernet working groups for chip-to-module and board-to-board connections. The 10-Gigabit Ethernet Task Force defined the XAUI serial-link interface for module and board connections in 10G Ethernet systems. XAUI supports a 3.125-Gbit/sec-per-pin raw data rate (2.5 Gbit/sec after coding) on four transmit and four receive lanes. Since it is defined to drive 20 inches over FR4-based boards with two connectors, XAUI links are starting to be used for backplane connections.

The InfiniBand switched-fabric architecture is targeted at connecting server clusters and server blades in data centers. It supports a 2.5-Gbit/sec wire-speed connection (2 Gbit/sec after coding) with 1-, 4- or 12-lane link widths, over copper, fiber and cable connections. InfiniBand's 17-meter cable reach may prove very attractive for data-center connections.

Fibre Channel has been defined for storage connections, and is most often used in an arbitrated-loop topology to interconnect network storage ports and clusters. The standard supports bandwidths of 1.06 Gbit/sec and 2.12 Gbit/sec at distances of up to 10 kilometers over optical fiber media. Newer proposals include 4.24-Gbit/sec and 10-Gbit/sec (4 x 3.18-Gbit/sec channels) Fibre Channel links.

The proliferation of interconnection standards has been driven primarily by the need for different logical layers to address specific application requirements. However, at the physical layer, all of the I/O standards can be generally grouped into either a parallel bus or a serial-link physical layer. Within these groupings, there is much commonality in the physical layer specifications.
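
One way to see that commonality is to line up the per-lane rates quoted above: most of the serial links run at 1.5 to 3.125 Gbit/sec on the wire with 8b/10b-style coding, differing mainly in lane count. The comparison below restates those figures (raw rates are back-calculated from the coded rates where only one was quoted); the lane counts are representative configurations, not the only widths each standard allows.

    # Per-lane rates quoted above, restated side by side. Lane counts are
    # representative configurations, not the only widths the standards define.

    serial_links = {
        # name: (raw Gbit/s per lane, effective Gbit/s per lane, lanes)
        "Serial RapidIO": (3.125, 2.5, 4),
        "Serial ATA":     (1.5,   1.2, 1),
        "PCI Express":    (2.5,   2.0, 1),
        "XAUI":           (3.125, 2.5, 4),
        "InfiniBand":     (2.5,   2.0, 4),
    }

    for name, (raw, eff, lanes) in serial_links.items():
        print(f"{name:15s} raw {raw} Gbit/s, effective {eff} Gbit/s, "
              f"x{lanes} aggregate {eff * lanes:.1f} Gbit/s")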

For mass adoption, these high-speed physical I/Os must be ready for fast design cycles and high-volume manufacturing. To that end, the serial-link PHY definitions have coalesced around a common set of specifications. This has allowed the emergence of analog PHY cores that can address multiple standards and that can be dropped into standard ASIC and ASSP design flows. The availability of these cores is helping to enable the adoption of serial-link PHYs in volume PC applications. The industry is forming an infrastructure to provide robust, manufacturable serial-link cores that meet the needs of the design community.
