Power Management of PCIe PIPE Interface
In recent years there has been a clear trend toward serial data transfer in place of parallel data transfer, driven by the need for higher performance and better data integrity. One example is the migration from PCI/PCI-X to PCI Express. A serial interface between two devices requires fewer pins per device package, which reduces chip and board cost as well as board design complexity. Because serial links can be clocked considerably faster than parallel links, they also scale much better in performance.
To accelerate verification of PCI Express based sub-systems and to shorten PCI Express endpoint development time, Intel defined PIPE (PHY Interface for the PCI Express Architecture) and published it for industry review in 2002. PIPE is a standard interface between the PHY sub-layer, which handles the lower levels of serial signaling, and the Media Access Control (MAC) layer, which handles addressing and access-control mechanisms. A diagram in the full article illustrates the role PIPE plays in partitioning the PHY layer for PCI Express.
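For context on the power-management side of this interface, PIPE defines a small set of MAC-driven signals for link power states, notably PowerDown[1:0] for requesting the P0/P0s/P1/P2 states and PhyStatus for the PHY's acknowledgement. The C sketch below is purely illustrative: the state encodings reflect common PIPE revisions and should be verified against the exact specification version in use, and the `pipe_phy_model_t` structure and `mac_request_power_state()` helper are hypothetical names used only to show the handshake.

```c
#include <stdio.h>
#include <stdbool.h>

/* PIPE power states requested by the MAC on PowerDown[1:0].
 * Encodings follow common PIPE revisions; verify against the
 * exact PIPE specification version in use. */
typedef enum {
    PIPE_P0  = 0x0, /* 00b: normal operation, full functionality     */
    PIPE_P0S = 0x1, /* 01b: power saving with fast recovery latency  */
    PIPE_P1  = 0x2, /* 10b: deeper power saving, longer recovery     */
    PIPE_P2  = 0x3, /* 11b: lowest power state                       */
} pipe_power_state_t;

/* Hypothetical software model of the MAC/PHY power handshake.
 * In hardware, the PHY asserts PhyStatus once the requested
 * power-state transition has completed. */
typedef struct {
    pipe_power_state_t power_down; /* current PowerDown[1:0] value */
    bool               phy_status; /* PhyStatus indication from PHY */
} pipe_phy_model_t;

static void mac_request_power_state(pipe_phy_model_t *phy,
                                    pipe_power_state_t next)
{
    phy->power_down = next; /* MAC drives PowerDown[1:0]              */
    phy->phy_status = true; /* completion modeled as immediate here;
                               a real PHY may take many PCLK cycles   */
}

int main(void)
{
    pipe_phy_model_t phy = { PIPE_P0, false };

    mac_request_power_state(&phy, PIPE_P1);
    printf("PowerDown[1:0] = %d, PhyStatus = %d\n",
           phy.power_down, phy.phy_status);
    return 0;
}
```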
To read the full article, click here