Enabling Robust and Flexible SOC Designs with AXI to PCIe Bridge Solutions

By Stéphane Hauradou
Co-founder and CTO
PLDA

A bridge between two standard protocols is an attractive building block for system designers. When designing an application around a standard protocol, a bridge to another protocol delivers all of the benefits of that second protocol with a far less intensive design process.

One such bridge design being considered and implemented in volume today is the bridge between the PCI Express and AXI protocols. PCI Express application designers are finding that this bridge lets them implement their applications on any AMBA-based SOC with minimal effort. Additionally, SOC designers are using these bridges to communicate seamlessly with PCI Express, allowing an easy interface to I/O protocols either off-chip or off-board and further extending memory and computing capabilities.

Building such a bridge solution is extremely attractive, but can be a daunting task. Both PCI Express and AXI employ advanced protocol concepts targeting high-performance, high-frequency system design and enabling high effective data rate transfers. However, PCI Express uses a packet-based layered protocol to make effective use of differential-pair signaling technology, while AXI uses parallel channels with flexible relative timing between them to support high-speed on-chip transfers. Many designers are turning to third-party solutions to maximize the points of commonality while using advanced design to minimize issues and reduce the time, cost and effort required to implement the bridge.

AXI-PCIe Bridge Design: Points of Commonality and Design Challenges

PCI Express and AXI protocols share some key mechanisms, making them a natural fit for bridging technologies:

  • Transfer sizes: A maximal AXI transfer of 1024 bits corresponds to a 128-byte payload transfer in PCI Express. AXI supports 32- and 64-bit burst transfers, which correspond to typical PCI Express ports’ data paths (see the sketch after this list).
  • Symmetric protocol: Both protocols can transmit packets in one direction while simultaneously receiving packets in the other direction.
  • Outstanding requests: Both protocols are designed to issue new transfers before previously issued transfers have completed, which sustains high effective data rates even when individual transfers incur latency.
  • Out of order transfers: Both protocols support out-of-order transaction completion, which enables a fast-responding target to complete before slower targets.
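
To make the transfer-size correspondence concrete, the following sketch (in Python, purely illustrative; the helper names are invented for this article) shows how the byte count of an AXI burst maps onto PCI Express payload sizes, splitting a burst into multiple TLPs when it exceeds the Max_Payload_Size:

```python
def axi_burst_bytes(data_width_bits: int, burst_len: int) -> int:
    """Total bytes carried by an AXI burst of burst_len beats."""
    return (data_width_bits // 8) * burst_len

def split_into_tlps(total_bytes: int, max_payload: int = 128) -> list:
    """Split a burst into TLP payload sizes no larger than Max_Payload_Size."""
    payloads = []
    while total_bytes > 0:
        chunk = min(total_bytes, max_payload)
        payloads.append(chunk)
        total_bytes -= chunk
    return payloads

# A 64-bit AXI burst of 16 beats carries 1024 bits, i.e. 128 bytes --
# exactly one 128-byte PCI Express payload.
assert axi_burst_bytes(64, 16) == 128
assert split_into_tlps(axi_burst_bytes(64, 16)) == [128]
```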

Although AXI and PCI Express share major conceptual key points, the design of an efficient bridge between the two protocols is far from a straightforward mapping. There are some key points of differentiation that require careful design to overcome. These include:

  • Transfer efficiency
  • Clock Domain Crossing
  • Low Power states
  • Interrupt mechanisms
  • Bridge Configuration

By re-using a third-party solution, SOC designers can be assured that the bridge will work as expected the first time, minimizing design headaches. A qualified IP provider of an AXI-PCIe bridge product will review and solve these key issues.

Transfer efficiency:

While the AXI protocol supports parallel transmission of Read Requests, Write Requests, and Read Completions in the same direction, the PCI Express protocol can transmit only a single type of packet at any given moment. The choice of which packet to transmit has a direct impact on overall system performance. Certain elements should be taken into consideration (a sketch of this arbitration problem follows the list):

  • Are there enough credits for transmitting a packet? If not, which other packet could be transmitted? This information needs to be analyzed dynamically to ensure steady throughput.

  • Is there enough buffer space for the returned Completions in case of a Read request?

  • How are different priorities allocated effectively between packets?

  • How can the system efficiently transfer a succession of different packets without introducing idle states on the PCI Express link?

  • How can low latency be assured on the AXI to PCI Express path, enabling the high data rates expected by the system?
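
The transmit-side questions above boil down to an arbitration problem: pick the next packet type that both has something queued and has flow-control credits available, so the link never idles unnecessarily. The sketch below is a minimal, hypothetical model of that idea in Python; the class name, the credit granularity, and the simple rotating-priority policy are assumptions for illustration, not a description of any particular bridge IP.

```python
from collections import deque

class TxArbiter:
    """Credit-aware transmit arbitration (illustrative model only)."""

    def __init__(self, credits):
        # credits: available credits per packet type, e.g.
        # {"posted": 4, "non_posted": 4, "completion": 4}
        self.credits = dict(credits)
        self.queues = {t: deque() for t in credits}
        self.order = list(credits)                 # simple rotating priority

    def push(self, pkt_type, pkt):
        self.queues[pkt_type].append(pkt)

    def select(self):
        """Return the next packet whose type still has credits, or None."""
        for _ in range(len(self.order)):
            t = self.order[0]
            self.order.append(self.order.pop(0))   # rotate priority for fairness
            if self.queues[t] and self.credits[t] > 0:
                self.credits[t] -= 1               # consume one credit
                return self.queues[t].popleft()
        return None                                # nothing eligible: link idles

arb = TxArbiter({"posted": 2, "non_posted": 1, "completion": 2})
arb.push("posted", "MemWr TLP")
arb.push("non_posted", "MemRd TLP")
print(arb.select(), arb.select())   # MemWr TLP MemRd TLP
```

A real arbiter would also weigh the buffer space reserved for returning Completions and the priorities configured per packet type; the point here is only that credit state has to be consulted dynamically on every selection.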

On the other side of the equation, while a PCI Express port can receive only a single packet at any given moment, the AXI destination might not be able to accept the received packet right away, or as quickly as the PCI Express port can transfer it. This raises similar key design questions (a sketch of this dispatch problem follows the list):

  • How can the system initiate parallel transfers, utilizing AXI’s capability to handle several channels in parallel? For instance, a slow Write request can run in parallel with a fast Completion channel.

  • While dealing with multiple channel transfers, how can the system guarantee the PCI Express ordering model for deadlock avoidance?

  • Is there enough buffer space for the returned Completions in case of a Read request?

  • How can the system efficiently transfer a succession of different packets without introducing idle states on the AXI interface?

  • How can low latency be assured on the PCI Express to AXI path, enabling the high data rates expected by the system?
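
On the receive side, the corresponding structural problem is dispatching incoming TLPs to parallel AXI channels without violating the PCI Express ordering model. The sketch below is a minimal, hypothetical illustration: completions are steered to a separate path but are not allowed to bypass older posted writes that have not yet been accepted by the AXI side. Real ordering rules cover more cases (non-posted requests, relaxed ordering); the names and bookkeeping scheme are invented for this example.

```python
from collections import deque

class RxDispatcher:
    """Illustrative receive-side dispatch with a posted-write ordering fence."""

    def __init__(self):
        self.pending_writes = deque()   # posted writes waiting for the AXI write channel
        self.pending_cpls = deque()     # (writes_ahead, completion) for the read path

    def receive(self, tlp_type, tlp):
        if tlp_type == "posted_write":
            self.pending_writes.append(tlp)
        elif tlp_type == "completion":
            # record how many older posted writes must drain before this completion
            self.pending_cpls.append((len(self.pending_writes), tlp))

    def drain_write(self):
        """Called when the AXI write channel accepts one posted write."""
        if self.pending_writes:
            self.pending_writes.popleft()
            self.pending_cpls = deque(
                (max(n - 1, 0), c) for n, c in self.pending_cpls)

    def next_completion(self):
        """Return a completion only if no older posted write is still queued."""
        if self.pending_cpls and self.pending_cpls[0][0] == 0:
            return self.pending_cpls.popleft()[1]
        return None
```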

Clock Domain Crossing:

Within the bridge design, it is most likely that the PCI Express and AXI controllers will be running at different clock rates. The AXI clock rate is essentially derived from SOC system considerations to accommodate specific bandwidth requirements. The PCI Express clock rates are derived from the physical characteristics of the PCI Express link (link width, SERDES configuration, and supported bit rates). With multiple supported bit rates (introduced in PCI Express Gen2), the PCI Express clock rate is also bound to change dynamically during functional operation.

A Clock Domain Crossing module is therefore essential in the design. The design of such a Clock Domain Crossing module should take the following questions into consideration:

  • Which point between the AXI interface and the PCI Express transceivers would require the least logic and design complexity for implementing a CDC module?

  • Which point between the AXI interface and the PCI Express transceivers should be chosen so that as much of the design as possible runs at its minimum required frequency? For example, a 2.5 Gbps x1 PCI Express port using a 16-bit PIPE interface would run at 125 MHz.

  • How can you maximize performance while balancing power needs? For example, a PCI Express port with a 32-bit data path running at 125 MHz would actually be running its higher protocol layers twice as fast as necessary, resulting in unnecessary power consumption (the clock-rate arithmetic is sketched after this list).
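
The clock-rate examples in the list above follow from simple arithmetic: for links using 8b/10b encoding, the datapath clock is the effective (post-encoding) link bandwidth divided by the datapath width. A minimal sketch, with an invented helper name:

```python
def pcie_clock_mhz(lane_rate_gbps: float, lanes: int, datapath_bits: int,
                   encoding_efficiency: float = 8 / 10) -> float:
    """Required datapath clock in MHz for links using 8b/10b encoding."""
    effective_gbps = lane_rate_gbps * lanes * encoding_efficiency
    return effective_gbps * 1000 / datapath_bits

# 2.5 Gbps x1 over a 16-bit PIPE interface needs 125 MHz.
print(pcie_clock_mhz(2.5, 1, 16))   # 125.0

# The same x1 link on a 32-bit datapath only needs 62.5 MHz, so clocking it
# at 125 MHz runs the upper layers twice as fast as necessary.
print(pcie_clock_mhz(2.5, 1, 32))   # 62.5
```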

Low Power states:

When the bridge is idle between transfers, automatic power-saving mechanisms should be applied to maximize power efficiency and reduce heat. Advanced low-power features enabled in the PCIe protocol, such as clock removal and processor sleep procedures, must be taken into account in the design.
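
As a rough illustration of such an automatic mechanism, the sketch below models an idle-timeout policy: after a programmable number of idle cycles the bridge requests a low-power state (for example clock removal) and wakes as soon as traffic reappears. The threshold, state names, and policy are assumptions for illustration only.

```python
class IdlePowerManager:
    """Illustrative idle-timeout policy for automatic power saving."""

    def __init__(self, idle_threshold: int = 1024):
        self.idle_threshold = idle_threshold
        self.idle_cycles = 0
        self.state = "ACTIVE"

    def tick(self, traffic_pending: bool) -> str:
        if traffic_pending:
            self.idle_cycles = 0
            self.state = "ACTIVE"            # wake immediately on new traffic
        else:
            self.idle_cycles += 1
            if self.idle_cycles >= self.idle_threshold:
                self.state = "LOW_POWER"     # request clock removal / link low power
        return self.state
```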

Interrupt mechanisms:

A PCI Express to AXI bridge design should naturally support interrupt propagation between the protocols. PCI Express error scenarios should be propagated to the interrupt vector, along with power management events.
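
One common way to structure this propagation, sketched below purely for illustration, is to collect PCI Express error and power-management events into a status vector with a matching enable mask, asserting a single interrupt line toward the AXI/SOC side. The bit assignments here are invented.

```python
IRQ_BITS = {
    "correctable_error":   1 << 0,
    "uncorrectable_error": 1 << 1,
    "pme_event":           1 << 2,   # power-management event
    "link_down":           1 << 3,
}

class InterruptController:
    """Illustrative status/enable interrupt model for the bridge."""

    def __init__(self):
        self.status = 0
        self.enable = 0

    def raise_event(self, name):
        self.status |= IRQ_BITS[name]

    def clear(self, name):
        self.status &= ~IRQ_BITS[name]

    def irq_asserted(self):
        # single interrupt line toward the AXI/SOC side
        return bool(self.status & self.enable)
```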

Bridge Configuration:

A vast number of bridge parameters should be easily configurable to meet system needs. These configurable parameters include buffer sizes expressed as credits per packet type, the number of outstanding requests, maximum payload size, address mapping values, low-power parameters, and specific AXI and PCI Express protocol features.
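
The sketch below gathers those parameters into a single configuration record, simply to show their scope; the field names and default values are invented for this article and do not reflect any particular product's register map or generics.

```python
from dataclasses import dataclass, field

@dataclass
class BridgeConfig:
    # flow-control buffer sizing, expressed as credits per packet type
    posted_header_credits: int = 8
    non_posted_header_credits: int = 8
    completion_header_credits: int = 8
    # transaction handling
    max_outstanding_requests: int = 16
    max_payload_bytes: int = 128
    # address translation between the AXI and PCI Express address spaces
    address_map: dict = field(default_factory=dict)   # AXI base -> PCIe base
    # low-power behaviour
    idle_cycles_before_low_power: int = 1024
    # protocol feature selection
    axi_data_width_bits: int = 64
    pcie_link_width: int = 1
```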

Because a PCI Express to AXI bridge sits between two standard protocols, it should be an often re-used block. Having a number of key parameters that are user-configurable and software-controlled is, therefore, an important design consideration when creating or choosing a bridge solution.

In summary, while a PCI Express to AXI bridge module is a key enhancement to a PCI Express application design and can add considerable value to an SOC-based design, designing such a bridge is a tricky task that requires considerable effort and expertise. Choosing a third-party bridge solution can, therefore, reduce the time, cost and effort incurred by SOC designers while improving throughput and functionality in the system.

About the Author:

Co-founder and CTO of PLDA, the industry leader in the high-speed bus IP market, Stéphane Hauradou earned a Bachelor of Engineering from the Polytechnic School of Montreal and a Master's in Microelectronics from Sup'Telecom in Paris. His master's thesis concentrated on the development of the first PCI IP controller for Programmable Logic Devices.