Making Better Front-End Architectural Choices Avoids Back-End Timing Closure Issues
By Charlie Janac – President and CEO of Arteris
Today’s SoC architectures are so complex that they are creating rifts between design teams on a given project. For example, when architects decide on the functionality and underlying data flow of a design during the front end of the chip design process, they often have little idea of the myriad timing closure challenges that get handed to the synthesis and place-and-route teams during the back end of the process. This problem has added months to chip design schedules, especially as process technology migrates to smaller geometries such as 16/14 nanometer FinFET. A step that used to take only weeks now contributes to significant schedule slips, and in some cases entire chip projects are at risk of cancellation because timing cannot be closed.
The bad news: FinFET-type processes present greater timing closure challenges because signals travel greater distances within a design as densities increase, while voltage thresholds are lower and operating frequencies are typically higher. The good news, for designers on both the front end and the back end of the process, is that network-on-chip technology now makes it possible to predict timing closure problems during the architectural data-flow and system-functionality steps of the front end, helping designers avoid timing closure challenges during back-end place and route. This predictive approach is helping SoC design teams avoid schedule slips and get to market sooner.
When the 40nm process node was the most advanced available, chip architects would usually draw an SoC floor plan on a piece of paper in the early design stages and then leave the physical constraints to the layout group on the back end of the process. After the 40nm SoC generation, life got harder due to timing closure issues, as well as a myriad of other challenges.
The 28nm generation brought network-on-chip technology more prominently into the design flow. The part of the SoC that most affects timing closure is the interconnect, because it contains the majority of the SoC's wires, links to all major parts of the chip, and spans the entire die. Network-on-chip (NoC) interconnect technology introduced packetized transport for on-chip communications and, more importantly, enabled the interconnect to be isolated from the other IP blocks by placing Network Interface Units (NIUs) at the edge of the interconnect, adjacent to the IP blocks with which they interface. This set of NoC interconnect capabilities allows automated timing closure to be performed within the interconnect before design engineers attempt to close timing for the entire SoC.
Timing Closure and Pipelines
Timing closure issues occur when a signal takes more than one clock cycle to traverse a physical connection from its initiator IP to its target IP. When this happens, pipeline stages or repeaters must be inserted to maintain the target frequency. Inserting the right number of pipeline stages in the right places is how timing is closed.
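As a rough illustration of that relationship (not taken from the article), the short Python sketch below estimates how many pipeline stages a single link would need for an assumed wire delay, link length, and target clock frequency; all constants are hypothetical.

```python
import math

def pipeline_stages_needed(link_length_mm: float,
                           wire_delay_ns_per_mm: float,
                           logic_delay_ns: float,
                           clock_freq_mhz: float) -> int:
    """Estimate the pipeline stages needed so that no segment of the link
    exceeds one clock period. All delay figures are assumptions."""
    clock_period_ns = 1000.0 / clock_freq_mhz
    total_delay_ns = link_length_mm * wire_delay_ns_per_mm + logic_delay_ns
    if total_delay_ns <= clock_period_ns:
        return 0  # the whole path already fits in one cycle
    # Split the path into cycle-sized segments; stages sit between segments.
    return math.ceil(total_delay_ns / clock_period_ns) - 1

# Hypothetical example: a 5 mm link at 1 GHz with 0.3 ns/mm of wire delay.
print(pipeline_stages_needed(5.0, 0.3, 0.2, 1000.0))  # -> 1
```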
Traditionally, pipelines are added manually by the interconnect RTL team, a time-consuming process that is prone to error. Furthermore, the interconnect changes faster than timing can be closed because of engineering change orders (ECOs), so a manual timing closure scheme must be over-engineered to anticipate the evolution of the SoC. A large SoC can have more than 6,000 potential pipeline locations, each with one to nine configuration choices, which yields an astronomically large number of combinations, and a complex SoC may have up to 60 timing parameters to set as well. This level of complexity is too great to handle with manual methods, and attempts to do so can result in schedule slips.
Today, however, timing closure can be automated using a combination of NoC interconnect RTL and NoC-specific physical-awareness tools. Such tools can estimate the achievable frequency at the architectural/RTL level and can ease and automate the timing closure process at the later place-and-route stage. Using these tools improves SoC schedule predictability and helps optimize interconnect area, power and latency.
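To illustrate the kind of early estimate such tools produce (a minimal sketch, not Arteris's actual flow), the following Python fragment derives the maximum achievable interconnect frequency from assumed link lengths, with and without a pipeline stage on the critical link; the link names and delay numbers are hypothetical.

```python
# Minimal sketch, not any vendor's tooling: estimate achievable interconnect
# frequency at the architectural level from assumed link lengths.
WIRE_DELAY_NS_PER_MM = 0.3  # assumed average routed-wire delay

# Hypothetical links: name -> approximate routed length in mm
links = {"cpu_to_ddr": 6.0, "gpu_to_sram": 2.5, "dma_to_periph": 1.0}

def max_frequency_mhz(length_mm: float, pipeline_stages: int = 0) -> float:
    """The longest unbroken segment determines the achievable clock period."""
    segment_delay_ns = (length_mm * WIRE_DELAY_NS_PER_MM) / (pipeline_stages + 1)
    return 1000.0 / segment_delay_ns

critical_name = max(links, key=links.get)
print(f"critical link : {critical_name}")
print(f"no pipelining : {max_frequency_mhz(links[critical_name]):.0f} MHz")
print(f"one stage     : {max_frequency_mhz(links[critical_name], 1):.0f} MHz")
```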
Find and Fix Timing Closure Issues at the Earliest Stages of Chip Design
In a complex, sequential process such as SoC design, problems addressed early are less costly to resolve than problems addressed later. It is therefore best to fix potential timing closure problems during the earlier SoC architecture phase rather than during the later RTL development or place-and-route phases. Design teams that leave the timing closure task until the place-and-route (P&R) phase of a complex SoC expose their project to the risk of several days- or weeks-long P&R iterations. These iterations add cost and schedule slips, and can cause the project to miss a critical market window, negating early-market profits and market-share momentum.
Designers and architects who want to adopt new methods into their design flow to avoid back-end timing closure challenges should evaluate three capabilities (illustrated in the sketch after the list):
- Automatic generation of a meta floorplan in the front end, based on the list of IPs and the parameters of their IP connectors. This provides knowledge of IP connector/socket locations and therefore of interconnect link distances;
- Automatic placement of the interconnect IP RTL based upon IP block socket locations from the meta floorplan;
- Ability to automatically turn pipelines on to achieve timing closure.
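As a minimal sketch of how these three capabilities could fit together (the data structures, coordinates and delay figures below are illustrative assumptions, not a description of any vendor's tool), consider:

```python
import math
from dataclasses import dataclass

WIRE_DELAY_NS_PER_MM = 0.3   # assumed average routed-wire delay
TARGET_PERIOD_NS = 1.0       # assumed 1 GHz target clock

@dataclass
class Socket:
    ip_block: str
    xy_mm: tuple  # socket location taken from the meta floorplan

@dataclass
class Link:
    initiator: Socket
    target: Socket
    pipeline_stages: int = 0

# 1) Meta floorplan: hypothetical IP blocks and their socket locations.
cpu = Socket("cpu", (0.0, 0.0))
ddr = Socket("ddr", (5.0, 3.0))
gpu = Socket("gpu", (1.0, 4.0))

# 2) Interconnect links placed between the sockets (an NIU sits at each end).
links = [Link(cpu, ddr), Link(gpu, ddr)]

# 3) Turn on just enough pipeline stages per link to meet the target period.
for link in links:
    dx = abs(link.initiator.xy_mm[0] - link.target.xy_mm[0])
    dy = abs(link.initiator.xy_mm[1] - link.target.xy_mm[1])
    delay_ns = (dx + dy) * WIRE_DELAY_NS_PER_MM   # Manhattan-distance estimate
    if delay_ns > TARGET_PERIOD_NS:
        link.pipeline_stages = math.ceil(delay_ns / TARGET_PERIOD_NS) - 1
    print(link.initiator.ip_block, "->", link.target.ip_block,
          ":", link.pipeline_stages, "stage(s)")
```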
These capabilities have been successfully implemented by chip design teams. They are helping architects and back-end place-and-route teams avoid the timing closure challenges that delay product introductions. Creating chip architectures with knowledge of their effects on timing closure promises to enable the industry to design more complex SoCs while reducing the risk of schedule slips caused by back-end timing closure issues uncovered at the latest stages of the design flow.
About the Author:
K. Charles Janac is chairman, president and chief executive officer of Arteris. Over a career spanning more than 20 years, he has worked in multiple industries including electronic design automation, semiconductor capital equipment, nanotechnology, industrial polymers and venture capital.