Using Vera and Constrained-Random Verification to Improve DesignWare Core Quality

By Chris Rosebrugh, Director of Engineering
Synopsys

Introduction

As more system-on-chip (SoC) engineers rely on re-use to cut design time and reduce risk, the demand for synthesizable cores and other forms of intellectual property (IP) continues to rise dramatically. Not surprisingly, the successful usage of IP is closely correlated with IP quality. High quality demands the use of advanced tools and methods for functional verification to ensure that the IP works in all possible usage scenarios.

As the leading provider of standards-based interface IP, Synopsys constantly evaluates and evolves our processes for achieving the high quality that our customers expect. Of course, having ready access to the best suite of products in the EDA industry allows us to freely select those tools that support advanced methodologies. We’re not mandated to use specific Synopsys tools; we use the ones that improve our IP products to provide our customers the best possible design and verification reuse experience.

This article discusses our transition to the use of constrained-random stimulus generation in the development of our synthesizable interface cores, a key part of our DesignWare® IP product portfolio. We interacted with the Synopsys Verification Group much like one of their customers; we evaluated the product (Vera®) that we thought would help us, and then used it on an initial project with assistance from the same engineers that support verification customers. The result was a significant, measurable increase in the quality of our cores that directly benefited our IP customer base. Better verification also benefited our own development engineers, since they now spend more time working on new projects and less time helping customers deal with issues in current products.

IP Verification Background

Although constrained-random techniques have been available for a number of years, many design teams did not embrace them until they crossed a complexity threshold due to the size or the type of designs they created. Until they made this leap, designers relied on hand-written, directed tests that provided explicit stimulus to the design inputs, ran the design in simulation, and checked its outputs against expected results. This approach, while manual and somewhat error-prone, provided adequate results for small, simple designs.
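To make the contrast concrete, the sketch below shows the shape of a directed test in Python. The function names and the trivial parity "design" are hypothetical stand-ins; our actual directed tests were Verilog/VHDL testbenches driving the RTL core in simulation.

    # Hypothetical stand-in for the design under test; real directed tests
    # drive the RTL core in a simulator, not a Python function.
    def simulate_dut(stimulus):
        parity = sum(stimulus) % 2           # toy behavior: append even parity
        return stimulus + [parity]

    def directed_test():
        stimulus = [1, 0, 1, 1]              # explicit, hand-written input
        expected = [1, 0, 1, 1, 1]           # expected output worked out by hand
        assert simulate_dut(stimulus) == expected, "output mismatch"

    directed_test()

Every directed test covers exactly one scenario the author thought of, which is why the approach struggles as the number of interacting conditions grows.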

Given that most IP designs are well under 100,000 gates, most IP developers have relied on directed tests, perhaps supplemented by the limited pseudo-random capabilities achievable in Verilog and VHDL. This was the case early on for the cores in the Synopsys portfolio. Our thinking changed dramatically when we faced widespread deployment of a synthesizable core that implemented the USB 2.0 Host functionality.

Prior to some initial customer engagements, the USB 2.0 Host core had been verified by traditional methods (manual directed tests backed up by some random testing in Verilog), with effectiveness measured by traditional code-coverage metrics. The suite of 450 directed tests achieved what seemed to be reasonable coverage results:

  • 97.50% FSM coverage
  • 95.00% FSM transition coverage
  • 88.64% toggle coverage
  • 84.71% condition coverage
  • 98.23% line block coverage
  • 98.58% line statement coverage

Despite these good results, early reports from the field indicated that our initial customers were finding some corner-case problems that had not been caught by our fairly extensive test suite. We realized that the directed-test approach did not scale with design complexity: although the USB 2.0 Host core was at most a few hundred thousand gates (depending upon configuration options), it contains extremely complex control logic with many combinations of conditions that are hard to set up with directed tests. It was clear that we had to achieve more effective verification before deploying this core to our broad customer base.

Transitioning to Vera and Constrained-Random Verification

We considered a number of possible ways to improve our IP verification process and decided to adopt Vera for the USB 2.0 Host core. Because constrained-random verification can automatically generate a large number of test cases within the parameters (constraints) specified by the verification team, it can hit corner cases that neither the design nor the verification engineers would ever have anticipated. Without constrained-random stimulus, the bugs lurking in these corners hide until late in the development cycle, or aren't found until customers encounter them.
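The Python sketch below illustrates the idea, not Vera's actual syntax: random stimulus is drawn only from within hand-written constraints, so every generated test is legal, yet the combinations are ones no engineer sat down and enumerated. The field names and ranges are hypothetical, not the real core's interface; the actual environment expressed such constraints in OpenVera.

    import random

    # Conceptual sketch of constrained-random generation. Field names and
    # ranges are illustrative only.
    def random_transfer(rng):
        return {
            "addr":     rng.randrange(0, 128),    # constraint: legal 7-bit address
            "endpoint": rng.randrange(0, 16),     # constraint: legal 4-bit endpoint
            # Constraint with bias: 80% short payloads, 20% long ones, so both
            # common cases and length corner cases get exercised.
            "length":   (rng.randrange(0, 65) if rng.random() < 0.8
                         else rng.randrange(65, 1024)),
            "inject_error": rng.random() < 0.05,  # occasional error scenario
        }

    rng = random.Random(1)                        # seeded for reproducibility
    stimulus = [random_transfer(rng) for _ in range(10000)]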

In addition to its overall complexity, the USB 2.0 Host core had another aspect that made verification challenging. The functionality and performance of the core are application and configuration dependent, so even if versions are successfully running in silicon, corner-case bugs could still emerge when the core is used in a different way in another application. By automatically generating constrained-random tests across all configurations, we could achieve much broader verification across its different operating modes.
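The same Python stand-in style can illustrate that configuration dimension: each regression run randomizes not only the stimulus but also the build-time configuration under test. The knob names and values below are hypothetical, not the core's actual configuration options.

    import random

    # Hypothetical configuration knobs; the actual core's options differ.
    CONFIG_SPACE = {
        "num_ports":   [1, 2, 4],
        "data_width":  [8, 16, 32],
        "otg_support": [False, True],
    }

    def random_config(rng):
        # Pick one legal value per knob, so every run exercises a
        # potentially different operating mode of the core.
        return {knob: rng.choice(values) for knob, values in CONFIG_SPACE.items()}

    rng = random.Random(7)
    print(random_config(rng))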

Although we chose to use Vera first on the USB 2.0 Host core because of our quality concerns, we hoped that this approach would make verification more efficient and more thorough, and that we would apply it to other complex cores in our development pipeline. Accordingly, we decided to carefully track the time, resources, and results for our first application of Vera to determine the return on investment for the constrained-random methodology.

Before we started this effort, we had to convince our management to fund the initial development of the verification environment. With traditional directed tests and a fairly simple testbench, engineers can start finding bugs in simulation almost immediately. This satisfies management and others who are responsible for schedules, shipping, and marketing. Unfortunately, the core may also ship with corner-case functional bugs that cost customers time and effort.

With the constrained-random approach, some time is needed to set up the testbench environment. However, once Vera starts generating tests, the results quickly surpass those of directed tests. We made the case to our management that although this method wouldn’t produce bug results right away, the quality and number of bugs found and fixed would result in a better IP product that would satisfy customers and reduce the amount of time that Synopsys engineers would have to spend at customer sites resolving issues. We also knew that, as we developed verification IP for the protocols we support, the ramp-up time for each new core would decrease.

Hardware Verification Process

During the development of the constrained-random environment, but before we started running complete simulation tests, we procured an industry-standard USB hardware test suite. This suite contained 78 tests, 72 of which were applicable to our USB 2.0 Host. If there were no failures, these took 18 days to run back-to-back on an FPGA implementation of our core. The tests covered boundary conditions around all USB Host protocols. This process uncovered about 25 bugs in the design quite quickly, and our management asked us why we didn't simply rely on hardware tests to speed up the verification process.

We pointed out that this test suite did not cover some critical functions such as error injection. This is a common limitation of industry compliance tests, since error injection is difficult to do in hardware. Regardless of what is covered in hardware tests, it's very hard to debug failures when tests are running inside an FPGA. Visibility into internal signals and states is severely limited; it took us about two days to track down each of the 25 bugs. Each debug iteration took hours, involving re-synthesis of the design, specification of additional signals to be monitored, and remapping into the hardware.

We also wanted to develop a verification methodology applicable to all of our IP development. Our core offerings cover a wide variety of interface protocols, very few of which have industry-standard test suites even as comprehensive as USB's. In fact, the Host hardware test suite is unique in its extensiveness, yet still not sufficient to meet our quality standards. Therefore, relying on hardware testing would require building an in-house team to develop such extensive suites. We believed that using hardware testing exclusively for all cores, and for all configurations of flexible cores such as the USB 2.0 Host, would ultimately consume more resources and find fewer bugs than the combination of hardware testing and constrained-random verification.

We carefully tracked the bugs found during the verification process, and which tool or method found them. We have four primary categories of bugs that we track at Synopsys:

  • B4 – show-stopper bug that could prevent a product from working
  • B3 – significant functional bug that would affect some users
  • B1 and B2 – relatively minor bugs, usually with workarounds

The 25 bugs that we found by running the USB hardware tests fell into the B2 and B3 categories, so a number of important problems were found and fixed during this effort.

Constrained-Random Verification Process

As we developed the constrained-random testbench environment, we divided the project into phases in which we coded a specific USB testbench element based on its functionality and then tested the element on the relevant portion of the USB RTL core. We would drop the RTL into the new testbench and run rudimentary tests. We named this the integration phase, in which we wrung out bugs in new testbench code.

Over time, each new portion of the environment stabilized and we began finding bugs in the RTL instead. We referred to this as the debug phase. The ratio of bugs found in the testbench environment to bugs found in the RTL shifted from roughly 10:1 early in the integration phase to 1:10 late in the debug phase.

As we expected, the constrained-random approach required a significant investment of verification-engineering effort to set up the environment and run the tests. Figure 1 shows the mix of engineers working on the USB 2.0 project, which expanded to include the USB 2.0 Host and Device cores plus the implementation of On-The-Go (OTG) functionality. The abbreviations refer to engineering roles as follows:

  • RTL – Designers writing or modifying the core RTL
  • HW – Engineers running the FPGA hardware tests
  • DIR – Engineers writing directed tests
  • VIP – Engineers developing testbench models
  • CRV – Engineers setting up the constrained-random environment and running tests

Figure 1: Engineering heads dedicated to all ongoing USB IP development projects

Since Synopsys develops verification IP (VIP) as well as design IP, this project used the DesignWare USB 2.0 VIP products. We estimate that it takes about twice as many verification engineers to create high-quality VIP suitable for release as a product as it does to generate models and monitors strictly for in-house verification of a core. Figure 2 shows our complete USB verification environment, including cores and VIP.

Figure 2: USB 2.0 core and VIP verification environment

With Vera, the same basic test can be run many times in different ways by varying the "seed" that initiates randomness. We built our constrained-random environment and tests so that we could run many seeds for a relatively short time each (2-4 hours) as opposed to a few seeds for a long time. Then, when a bug popped up and the default logging wasn't sufficient for debugging, it was relatively easy to increase the report verbosity, rerun the test, and have results the same day. For the USB 2.0 Host core, we ran about 200 seeds per 24-hour day across 40 CPUs.
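The mechanics of this are easy to mimic: seeding the random generator makes every run deterministic, so a failing seed can be replayed exactly, with more logging, the same day. The Python sketch below (with a stand-in pass/fail check) shows the pattern; in our actual flow each seed ran a full Vera test for 2-4 hours.

    import random

    def run_test(seed, verbose=False):
        rng = random.Random(seed)
        # ... drive constrained-random stimulus derived from rng here ...
        passed = rng.random() > 0.001        # stand-in for a real checker
        if verbose and not passed:
            print(f"seed {seed}: detailed logging enabled for debug")
        return passed

    # One day's regression: many short runs, each with a different seed.
    failing = [seed for seed in range(200) if not run_test(seed)]
    for seed in failing:
        run_test(seed, verbose=True)         # deterministic replay of failure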

After some analysis, we discovered that finding and fixing bugs in the testbench environment or in the core RTL took about one-and-a-half days per bug. This gave us a certain amount of predictability because, depending on the complexity of the code we were debugging, we could estimate the amount of time that it would take to fix portions of the design. Ten bugs, for example, would take about 15 days to diagnose and fix.

After the initial ramp-up time, we were gratified that our constrained-random environment found nearly 30 RTL bugs that had been missed by both the original directed tests and the hardware testing. Fifteen of these bugs were classified as B3, 12 as B2, and one as B1. While we did not find any B4 bugs, we were particularly pleased that Vera was so effective at catching the B3 bugs, since many of those could have caused a lot of trouble for customers trying to use the core in diverse applications.

Figure 3 shows the discovery curves for all the B1, B2, and B3 bugs found by Vera on the USB 2.0 Host core. The B2 and B3 curves are fairly typical of constrained-random results, with the most fundamental problems fixed early in the process, followed by elimination of the remaining corner-case bugs. The B1 curve is not typical; the fact that only one B1 bug was found by Vera reflects the fact that the directed tests had done a good job of finding the less serious bugs (while missing many of the more critical ones). We released the core to customers only after we had completed our detailed test plan and all bug curves had flattened out, at which point we felt confident in our verification.

Figure 3: Bug-discovery curves for Vera running on USB 2.0 Host core

In the end, we were very pleased with the results of our constrained-random methodology. By using a number of Synopsys verification products, including Vera, VCS®, Leda®, and DesignWare VIP for USB and AMBA 2.0, along with Design Compiler® and tools for packaging cores, we found many corner-case bugs that had escaped other verification methodologies. Finding these bugs freed up our engineering resources to work on developing new IP rather than spending time debugging it in the field. Since we introduced the Vera-verified version of the USB 2.0 core, only a few customer problem reports have been logged, and nearly all of these have turned out to be minor (B1 and B2) issues.

Conclusions

With the excellent results we achieved, we've convinced all parties involved in IP development that Vera is a valuable part of our verification arsenal. In fact, since the USB 2.0 Host project, we've deployed the constrained-random approach across the USB 2.0 Device core, the USB OTG Controller Subsystem, our PCI Express family, and Serial ATA cores. We've seen similar results, including comparable bug-convergence rates and high verification quality, on all of these projects.

On newer projects, such as PCI Express, our start-up time has been considerably faster than for our first project. This is partly due to our greater familiarity with the approach and the OpenVera® hardware verification language. It’s also due to a well-established reference verification methodology that Synopsys uses internally and makes available to our customers. Had this methodology been in place when we verified the USB 2.0 Host core, our high-quality results would have been achieved even more rapidly.

Part of our experience with Vera has been learning how to assess verification thoroughness more accurately. In a sense, constrained-random stimulus generation is never really done, since we can always run more tests with more seeds. It was clear from our USB 2.0 Host experience that code coverage alone isn’t a sufficient measure of completeness. We use a combination of code coverage, test plan coverage metrics, and analysis of bug rates to determine when a core is ready for broad release to customers.

Looking forward, we are starting to use some new debug features in VCS that we expect will reduce our average diagnosis time per bug from 1.5 days to one day. We’re also planning to increase our use of assertions, functional coverage metrics, and formal verification in addition to constrained-random stimulus. We’ve seen the value of adopting advanced verification technologies, and we’ve seen that Vera greatly increases quality and reduces the design risk for Synopsys customers using our most complex DesignWare cores.

Acknowledgements

Thank you to Mike Donlin, Tom Borgstrom and Tom Anderson for helping with this article, and to John Coffin for his technical leadership.

Trademarks/Copyright ©2005 Synopsys, Inc. All Rights Reserved.
