Survey of Chip Designers on the Value of Formal Verification Across the Spectrum of Applications
Lili Bowers, Marketing Assistant, Jasper Design Automation
Norris IP, Director of Engineering, Jasper Design Automation
Over 50 engineers and engineering managers were surveyed at DAC 2009 by Jasper Design Automation as part of a market research and analysis program examining how designers use formal verification across the design cycle. Within eight application areas, respondents indicated which formal technology applications are most interesting and valuable, down to the detailed engineering tasks within each area and their hierarchy of value. The set of applications used in the survey are current uses of JasperGold and ActiveDesign, as developed by Jasper Design Automation and its customers.
A note on methodology: the respondents were asked to rank each application area subtopic on a scale of importance. Answers were scored 0 points (no perceived utility), 1 point (low), 2 points (medium), or 3 points (high). The graphs in this paper all follow this scoring scheme. Furthermore, the values for low, medium, and high are color coded for further depth of analysis.
The graph shows all eight of the major applications:
- Architectural verification
- RTL block verification
- RTL development
- Protocol certification
- Design and IP leverage
- Low-power verification
- SOC integration
- Post-silicon debug
You can see on this graph that, overall, the highest-value application for formal verification was voted to be RTL block verification, followed by promoting design and IP leverage and by post-silicon debug using formal. Other contenders were architectural analysis and verification, and chip integration tasks.
The graph shows, by color, the contributions from votes for high, medium, and low.
Now let's look more closely at the architectural verification application. Architectural formal verification involves verifying the design architecture, described in the SystemVerilog language, against the set of specifications. Typically, this takes place prior to any RTL coding. Uncovering architecture-related problems at this early stage prevents costly downstream redesigns. For example, it is practically infeasible to get sufficient coverage at the design level for cache-coherence protocol checking. There are four main categories of architectural verification:
- Communication protocol: A communications protocol is the set of standard rules for data representation, signaling, authentication, and error detection required to send information over a communications channel. A formal property language provides a precise medium for capturing these rules, and formal verification can verify the soundness and completeness of the rules and expose subtle synchronization flaws in the protocol (a small property sketch follows this list).
- Cache coherency protocol: A cache-coherence protocol contains complex interactions with parallelism and multi-threading. To verify a cache-coherence protocol, a tool must consider a range of traces that are both wide (in terms of starting and branching points) and deep (with long sequences of events). That is hard enough with an architectural model and infeasible with a design-level model. With complex protocols, it becomes extremely difficult to determine which corner cases are possible and how they might be manifested. Formal verification is well suited to discovering these corner cases in the protocol, and can provide architects with an executable specification that can be queried.
- Architectural liveness: Architectural verification is on the critical path because the consequences of an undetected architectural bug are so severe. RTL verification won’t catch such a bug; it will simply show that the RTL code matches the architectural spec. An architectural specification bug may well remain undetected until silicon, causing time-to-market delays, redesign, and possibly a silicon respin. Furthermore, liveness verification, i.e. checking that something defined as “good” eventually happens according to the architecture, is also extremely important.
- Power architecture: Low-power designs often utilize CPF or UPF to specify the desired power architecture, providing an easy-to-use, easy-to-modify specification of power intent throughout the flow: design, verification, and implementation. Such information, plus the temporal behavior controlled by the power management unit, can be converted into appropriate properties for formal verification, allowing automated, exhaustive analysis and debug of the power architecture.
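To make the idea concrete, below is a minimal sketch of how handshake-style protocol rules could be captured as SystemVerilog Assertions for formal verification. The signal names (req, gnt) and the 16-cycle grant bound are illustrative assumptions, not taken from the survey.

```systemverilog
// Hypothetical request/grant handshake rules expressed as SVA properties.
// All signal names and bounds are illustrative placeholders.
module handshake_props (input logic clk, rst_n, req, gnt);

  // Rule 1: a request, once raised, must stay high until it is granted.
  assert property (@(posedge clk) disable iff (!rst_n)
    req && !gnt |=> req);

  // Rule 2: a grant may only be issued in response to a pending request.
  assert property (@(posedge clk) disable iff (!rst_n)
    gnt |-> req);

  // Rule 3 (a bounded liveness-style check): every request is granted
  // within 16 cycles.
  assert property (@(posedge clk) disable iff (!rst_n)
    req |-> ##[1:16] gnt);

endmodule

// The property module can then be bound to an architectural model or RTL:
//   bind arch_model handshake_props u_props (.*);
```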
As the following graph shows, the highest value application is verifying communications protocols, followed by architectural liveness and verifying power architecture.
RTL block verification requires full proofs of critical functionality as the entire block or selected function is exhaustively verified. Doing so eliminates duplication by cutting down on block-level and chip-level simulation effort, and completely eliminates any corner-case issues for critical design functionality. There are five sub-applications:
- Verifying critical functionality: Formal verification enables exhaustive verification of critical functionality, removing uncertainty about corner cases that traditional simulation with directed and constrained-random tests may miss.
- Protocol certification: Formal property languages can capture protocol rules precisely in an executable format, enabling protocol certification to confirm that a piece of RTL obeys the protocol rules in all cases.
- Verifying token leakage: Token leakage is hard to detect in simulation, since its effect may not be apparent, manifesting only as performance degradation. By specifying token leakage as an explicit specification-level property for formal verification, corner cases that result in token leakage can be found and fixed (a counting sketch follows this list).
- Verifying packet integrity: Packet integrity is a common specification for data transport designs, and although it requires great capacity in a formal verification solution, it reduces the need to verify other implementation-level properties and achieves a higher return on investment for verification effort.
- Block-level simulation replacement: Simulation is quite time-consuming, creating block-level testbenches for simulation is tedious, and maintaining them is even harder as blocks evolve. Formal verification is much faster than simulation, obviates the overhead of creating and maintaining block-level testbenches, and evolves with RTL changes, so it is a natural choice to replace simulation.
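As a sketch of the token-leakage check described above, the following models outstanding tokens with a counter and asserts that tokens are neither returned when none are outstanding nor accumulated beyond their budget. The signal names and the MAX_TOKENS parameter are hypothetical.

```systemverilog
// Hypothetical token/credit accounting check for a flow-control interface.
module token_leak_check #(parameter int MAX_TOKENS = 8)
  (input logic clk, rst_n, tok_issue, tok_return);

  // Auxiliary model of the number of tokens currently outstanding.
  int unsigned outstanding;
  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n) outstanding <= 0;
    else        outstanding <= outstanding + tok_issue - tok_return;
  end

  // A token may only be returned if one is actually outstanding.
  assert property (@(posedge clk) disable iff (!rst_n)
    tok_return |-> outstanding > 0);

  // The count drifting past MAX_TOKENS would indicate token leakage.
  assert property (@(posedge clk) disable iff (!rst_n)
    outstanding <= MAX_TOKENS);

endmodule
```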
Of these applications, the one with the highest number of value votes is verifying critical functionality, followed by using formal for block-level simulation replacement and protocol certification, which in turn are followed closely by packet integrity verification.
RTL development comprises three designer-oriented tasks that improve baseline RTL block quality: establishing basic functional sanity; identifying sources of “X” and eliminating undesired propagation; and verifying register operations. Together, they allow RTL designers to debug their own code, reducing downstream verification effort.
- Incremental RTL development and verification: RTL is written incrementally, and ideally design changes could be tracked and verified incrementally, guarding against unintended side-effects of changes. The cost of uncovering a design bug increases non-linearly down the design flow. Formal verification can start with partial RTL, before testbenches are available, can aid RTL development productivity, and can effectively and incrementally verify design changes.
- Automatic register verification: Control and status registers (CSR) are the interface between software and hardware. Register verification is extremely important because a wrongly implemented CSR always breaks downstream functionality, and an implementation bug can make debugging a functional failure difficult. Formal technology converts the CSR specification into a set of properties capturing the conditions in which the control registers can be configured, the conditions in which the control registers should maintain stable values, and the conditions in which the status registers should receive a specified value. Formal verification can then verify that these conditions are met in all cases, and simplify the debugging of other failures.
- X-propagation detection: Certain coding styles may require designers to assign X (unknown) values in RTL, and yet, if not treated properly, RTL and gate-level simulation can mismatch. Unlike traditional simulation tests, which do not guarantee coverage for Xs, formal verification can exhaustively verify X propagation, proving that Xs are not propagated to critical areas and are blocked by appropriate blocking conditions (see the sketch after this list).
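The register-verification and X-propagation tasks might be expressed as properties along the following lines; the register address, signal names, and widths are invented for illustration.

```systemverilog
// Hypothetical register and X-propagation checks; all names are placeholders.
module rtl_dev_checks
  (input logic        clk, rst_n,
   input logic        wr_en,
   input logic [7:0]  addr,
   input logic [31:0] ctrl_reg,   // software-writable control register
   input logic [31:0] data_out);  // block's primary data output

  localparam logic [7:0] CTRL_ADDR = 8'h10;

  // Register check: the control register changes only when software writes
  // its address; in all other cycles it must hold a stable value.
  assert property (@(posedge clk) disable iff (!rst_n)
    !(wr_en && addr == CTRL_ADDR) |=> $stable(ctrl_reg));

  // X-propagation check: once out of reset, no X/Z value may reach the
  // primary data output.
  assert property (@(posedge clk) disable iff (!rst_n)
    !$isunknown(data_out));

endmodule
```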
As we can see from this graph, the most compelling return on investment is obtained for incremental RTL development and verification, followed by X-propagation detection and then by automated register verification.
Protocol certification ensures that individual components on a chip-level bus can function correctly. System-level buses linking multiple components display an enormous number of possible configurations, and it is infeasible to adequately verify them using system-level simulation alone. This method is a divide-and-conquer approach that verifies components individually yet exhaustively.
The general area of protocol certification encompasses:
- Master / slave configuration and verification: Formal property languages can capture protocol rules precisely in an executable format. Using the assume-guarantee approach of arranging the resulting properties, the same description can be applied to both masters and slaves of a specific protocol (see the sketch after this list). Formal verification of both masters and slaves using the same set of protocol properties ensures the masters and slaves interact with one another properly under all legal operating conditions.
- Verifying standard protocols such as AHB, AXI, etc.: Formal property verification can capture protocol rules precisely in an executable format and confirm that a block of RTL obeys the protocol rules in all cases. For standard protocols, the return on investment is especially high, since standard protocol properties that have already been applied to numerous RTL development projects may be commercially available.
- Verifying proprietary protocols involved in the design: As with standard protocols, formal verification is a great way to ensure correctness, document and maintain consistency among different projects, and avoid the potential confusion and imprecision of a paper document. Project teams can rapidly learn a new protocol and compare changes between revisions.
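A minimal sketch of the assume-guarantee arrangement mentioned above: the same protocol rule is asserted when verifying the component that drives the signal and assumed when verifying the component that receives it. The generic valid/ready rule and the VERIFY_MASTER macro are illustrative assumptions, not from any specific standard.

```systemverilog
// One protocol rule, reused as an assertion or an assumption depending on
// which side of the interface is under verification.
module valid_ready_rule (input logic clk, rst_n, valid, ready);

  // Rule: once valid is asserted, it must stay asserted until accepted.
  property valid_held;
    @(posedge clk) disable iff (!rst_n)
      valid && !ready |=> valid;
  endproperty

`ifdef VERIFY_MASTER
  assert property (valid_held);  // proof obligation on the master's outputs
`else
  assume property (valid_held);  // constraint on the slave's inputs
`endif

endmodule
```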
As we can see, verification of proprietary protocols is the top application, followed by verification of standard protocols, followed by master / slave configuration verification, which can involve a lot of dependencies and complexity.
Design and IP leverage comprises five categories:
- Design exploration and comprehension: Traditional simulation is limited for exploring and comprehending an unfamiliar design, since the user is required to figure out how to trigger a scenario by manipulating the inputs. Formal verification technology applied to the scenarios being investigated, along with visualization, advanced waveform generation, and debugging features, provides an ideal environment to explore and comprehend a design.
- Targeted configuration analysis: A reusable design is typically highly configurable, and yet it is difficult to verify the design for all configuration options. Formal verification combined with visualization technology can generate and annotate complex waveforms with the exercised behaviors, answering questions like "How do I program the design into the target state?" and "What target state will the design get into if I execute this programming sequence?" (see the cover-property sketch after this list).
- Modifying existing designs for design reuse efficiency: Leveraging an existing IP block often involves some changes and complete re-verification, reducing reuse efficiency. Focused re-verification of only the features related to the modifications made to the RTL is desirable.
- Promoting knowledge transfer and delivery: A static document is difficult to keep up to date as the design evolves, and waveforms captured in screenshots may not reflect end-user needs. Some formal-based solutions capture design knowledge in an executable specification with the ability to regenerate, annotate, and customize waveforms from the latest RTL.
- Design and IP leverage and deployment: Designers’ time is expensive, yet design verification and design reuse often rely on RTL knowledge found only in the head of the original designer. Since the RTL is the authority on what it can and cannot do, tools can extract the answer to any question directly from the RTL, instead of the original designer answering the same questions over and over for multiple design projects reusing the same block.
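For design exploration and targeted configuration analysis, cover properties are one way to ask a formal tool to generate a waveform that reaches a scenario of interest; the state encoding and signal names below are hypothetical.

```systemverilog
// Hypothetical cover properties used to explore an unfamiliar design.
module explore_covers
  (input logic       clk, rst_n,
   input logic [1:0] fsm_state,
   input logic       fifo_full);

  localparam logic [1:0] BUSY = 2'd2;

  // "How do I program the design into the BUSY state?" -- the tool's
  // generated trace is one programming sequence that gets there.
  cover property (@(posedge clk) disable iff (!rst_n)
    fsm_state == BUSY);

  // A deeper scenario: can the FIFO fill while the FSM is busy?
  cover property (@(posedge clk) disable iff (!rst_n)
    fsm_state == BUSY && fifo_full);

endmodule
```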
The respondents in the survey voted that formal technology has a high ROI for knowledge transfer and delivery, as well as for the ability to explore and comprehend legacy IP and to modify these existing designs efficiently.
These applications are the clear winners in this category and are moving into more mainstream use today due to the emergence of new formal verification solutions for the design community.
Low-power verification requires designers to exhaustively verify power up/down sequences, proper state-saving and restoring steps, as well as data integrity during state changes. This prevents power problems both structural (demanding a respin) and temporal (violating the power spec and possibly requiring a respin).
- Verifying power domains and modal operation: One of the most effective techniques in power management is power shut-off (PSO), which switches off power to parts of the chip when they are not in use. Formal verification can be used to specify the situations in which various domains should be switched off, to confirm that no activity is generated when a domain is switched off, and to confirm that the chip works correctly with every combination of PSO across the various domains (see the sketch after this list).
- Verifying state and sequence interactions for power architectures: Similar to modal operation, formal verification can confirm that state retention of key state elements works properly and that the system can recover after powering the various domains up and down.
- Full frequency / phase jitter: Clock tree optimization and clock gating, with possible asynchronous clock domain crossings, are typical in low-power architectures, since clock trees are a large source of dynamic power. Because of the tricky timing needed to trigger frequency jitter and phase jitter, simulation typically cannot be relied upon to detect functional errors due to jitter. Formal technology can efficiently verify end-to-end properties in the presence of frequency jitter and phase jitter.
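Under the assumptions stated here, the power shut-off and state-retention checks described above could look something like the following sketch. In practice such properties can be derived from the CPF/UPF description; every signal name below is a placeholder.

```systemverilog
// Hypothetical power shut-off (PSO) and retention checks.
module pso_checks
  (input logic       clk, rst_n,
   input logic       pwr_on,         // domain power switch
   input logic       iso_en,         // isolation enable for the domain
   input logic       domain_active,  // any activity observed in the domain
   input logic       save, restore,
   input logic [7:0] retained_val, restored_val);

  // No activity may be generated while the domain is powered down.
  assert property (@(posedge clk) disable iff (!rst_n)
    !pwr_on |-> !domain_active);

  // Isolation must already be enabled when the domain switches off.
  assert property (@(posedge clk) disable iff (!rst_n)
    $fell(pwr_on) |-> $past(iso_en));

  // State retention: the value saved before power-down must be the value
  // restored after power-up.
  property retention_ok;
    logic [7:0] v;
    @(posedge clk) disable iff (!rst_n)
      (save, v = retained_val) ##1 !pwr_on [*1:$] ##1 restore
        |-> restored_val == v;
  endproperty
  assert property (retention_ok);

endmodule
```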
Based on this survey, the highest value is derived from formal verification in verifying power domains and modal operation, as more and more designs become power-sensitive. The number of power domains and the number of different power modes is increasing, and despite standards like UPF and CPF, forging a new power analysis and verification path using formal certainly solves many problems. In this regard, the second most popular application is verifying state and sequence interactions for power architectures, which is similar to modal operations.
In the area of SOC integration there are three key items:
- Chip level connectivity checking
- Automated pad-ring verification
- Multi-cycle path generation
A pre-SoC-integration step establishes the connectivity of pins across subsystems and block levels. Connectivity issues are one of the primary factors in delays in chip-level integration. Establishing connectivity helps system-level verification but is time-consuming with conventional methods. Formal technology can bring a great deal of automation to problem formulation, analysis, and debugging.
- Chip-level connectivity checking: During SoC integration, establishing the connectivity of pins across subsystems and blocks is a necessary task, and involves functional signals and buses, general-purpose IO (GPIO) pins, and pads. Connectivity should be verified at an early stage of integration to avoid tedious debugging should connectivity problems cause functional verification to fail. Formal verification brings automation and exhaustiveness to the problem of connectivity formulation, analysis, and debugging (see the sketch after this list).
- Automated pad-ring verification: During SoC integration, the connectivity of pins across subsystems and blocks includes the configuration of the pad ring, which is often modified as the number of chip configurations grows. By capturing the intended connection in each configuration and utilizing formal technology for pad-ring verification, new configurations can easily be added. Since formal verification exhaustively explores all possibilities, it will identify corner cases where a connection should not exist, while simulation often won't.
- Multi-cycle path generation: During static timing analysis, RTL designers can specify multi-cycle paths to allow more time for particular paths. Any mistake in such commands can lead to a timing problem in silicon. Since these commands are mostly human-written, they are error-prone, and there is no good way to verify them with simulation. Formal technology with waveform generation helps designers comprehend the activity along specified multi-cycle paths, and also verifies that the specified cycle bounds for the multi-cycle paths are correct.
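Connectivity and multi-cycle-path checks of the kind described above might be sketched as follows; the pad, mux-select, and path-endpoint names are illustrative, and the 2-cycle bound is an assumed timing constraint.

```systemverilog
// Hypothetical chip-level connectivity and multi-cycle-path checks.
module soc_integration_checks
  (input logic        clk, rst_n,
   input logic        pad_gpio3, blk_a_gpio_in,  // top-level pad, block pin
   input logic [1:0]  cfg_sel,                   // pad-ring configuration
   input logic [15:0] mcp_src);                  // source of a declared 2-cycle path

  // Connectivity: in configuration 2'b01 the pad must be routed to the
  // block input.
  assert property (@(posedge clk) disable iff (!rst_n)
    cfg_sel == 2'b01 |-> blk_a_gpio_in == pad_gpio3);

  // Multi-cycle path: if the path is declared as 2 cycles in the timing
  // constraints, the source register must hold each new value for at least
  // 2 cycles, so the relaxed timing cannot be violated functionally.
  assert property (@(posedge clk) disable iff (!rst_n)
    !$stable(mcp_src) |=> $stable(mcp_src));

endmodule
```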
For chip integration, the vast majority of respondents found great utility in multi-cycle path generation and chip-level connectivity checking, followed by automated pad-ring verification.
During post-silicon debug, formal verification is viewed as important to rapidly validate fixes, to isolate the root cause of silicon bugs, and to prevent expensive serial bugs where the debug team iterates in a loop without proper root-cause analysis.
Often, problems in silicon are hard to reproduce in simulation, and extra effort is spent on trial-and-error attempts to capture the right input sequence. Formal technology can quickly eliminate incorrect hypotheses and is capable of locating the root cause of the bug. By appropriately leveraging formal technology along with simulation methods of post-silicon debugging, the entire process can be expedited. Once the bug is root-caused and an appropriate fix is made, formal technology can establish that the fix indeed eliminates the problem and has not introduced any new design errors.
- Isolate the root cause of silicon bugs: In many post-silicon debug situations, the team has some ability to extract a trace of what went wrong in the chip when it failed, but this trace is often severely limited. The team can identify some wrong behavior at the output of the chip, but does not understand the triggering event for this incorrect behavior. With formal technology, the process of isolating the root cause of a silicon bug is much more predictable than with simulation, because of the capability to find counter-examples very rapidly.
- Prevent expensive serial bugs: Debug teams may iterate endlessly in a loop without proper root-cause analysis. Validation of bug fixes is critical, since attempting to cure the symptom without understanding the root cause leads only to another, similar bug in the silicon respin. Formal verification, with its exhaustive nature, is well suited to validating a fix and making sure no other corner case will trigger a similar failure.
- Rapidly validate fixes: A silicon bug may be fixed in many ways, sometimes without a respin. Yet it is tedious and difficult to explore the effect of a proposed fix in simulation. Using formal verification, a fix can be validated quickly, due to the exhaustiveness of the technology. For example, it may confirm that a software fix is possible by limiting the number of outstanding transactions to a smaller limit, regardless of the operating conditions, whereas simulation can only indicate the lack of a failure for the current trace (a small sketch of this idea follows).
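A small sketch of the fix-validation idea above: the proposed workaround is modeled as an assumption on the environment, and the failure condition observed in silicon is asserted to be unreachable under it. The signal names and the limit of two outstanding transactions are hypothetical.

```systemverilog
// Hypothetical formal validation of a proposed software workaround.
module fix_validation
  (input logic       clk, rst_n,
   input logic [3:0] outstanding_cnt,
   input logic       buf_overflow);  // the failure observed in silicon

  // Model the workaround: software never allows more than 2 outstanding
  // transactions.
  assume property (@(posedge clk) disable iff (!rst_n)
    outstanding_cnt <= 2);

  // Prove exhaustively that, under the workaround, the failure is
  // unreachable -- something a single simulation trace cannot show.
  assert property (@(posedge clk) disable iff (!rst_n)
    !buf_overflow);

endmodule
```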
In conclusion, based on a sample of over 50 engineers and engineering managers surveyed in June 2009, the findings on the motivation to use formal verification can be summarized. For targeted ROI across the design cycle, RTL block verification; design and IP leverage; RTL development; and silicon debug are the top four important applications. These are followed by architectural analysis early in the design cycle, chip integration, and low-power design verification.
All of these applications as described in the survey are current uses of Jasper Design Automation solutions.