Why Football Players are Like Verification Engineers

by Charlie Dawson, Cadence Design Systems

We need to form a winning team, and it's all about performance. We need to find a group of highly skilled people and equip them to perform with peak productivity, predictability, and quality. It would be great to get a few super stars, but we know that it takes both breadth and depth to win. Therefore, we will have to be strong at every position, and we will need an adaptive game plan.

Are we talking about the pending NFL season or a verification project? It could be either. Just like a winning football team, our verification team has people with different skills and equipment (EDA technology) that must be synchronized to succeed. On the verification team we know that “super stars” can apply to both people and technology, but we also know that we can’t solely rely on the super star performance of point technology. We need to mesh our people and technology with our verification plan to maximize productivity, predictability, and quality.

Assembling the Team

Before we take the field, we need to know how we are going to approach the game. How successfully we applied the methodology, technology, and experience of our verification team on the previous project will determine the degree of risk and change we are likely to adopt for the new project. Ideally, we would prefer to take the technology elements that worked well for us and simply improve their performance. That would mean no change in methodology or training. However, experience has shown that “super star” technologies often come with a cost/benefit trade-off, so we must evaluate how each one improves the productivity, predictability, and quality our team needs to win on our next project.

Figure 1: Performance chart showing regions of interest

Many of the verification technologies our team can incorporate are graphed in figure 1. On the horizontal axis we have cycle speed, which measures how fast we run each compute cycle, and on the vertical axis we have cycle efficiency, which measures how many bugs we find with each compute cycle. For example, a technology that runs at 10 Mcps (millions of cycles per second) but finds only a few bugs would be graphed with high cycle speed and low cycle efficiency. If those bugs can only be found by running very long test sequences, it may be the right solution for one of the specialists on our team, but it may not have broad application.

One way to quantify this graph is to combine the two axes into a single measure of verification efficiency (VEFF). A high value of VEFF indicates a technology that maximizes both cycle efficiency and cycle speed to find bugs faster. In figure 1, the technologies that offer the highest VEFF congregate along the diagonal between the axes, indicating that they are broadly applicable in most verification projects. As in football, we do need some “special teams” technologies to go after unique classes of bugs, so the technologies outside of this zone must also be considered for our verification team.
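
As a rough illustration of how the two axes combine, here is a minimal Python sketch; the function, units, and example numbers are illustrative assumptions, not values from the article.

```python
# A minimal sketch of combining the two axes of figure 1 into a single
# figure of merit; units and example numbers are illustrative assumptions.
def veff(cycle_speed_cps, payoff_per_cycle):
    """Verification payoff (bugs or cover bins) per second of wall-clock run time."""
    return cycle_speed_cps * payoff_per_cycle

# A fast engine that rarely hits anything new...
fast_but_shallow = veff(10_000_000, 1e-7)   # 1.0 per second
# ...can land at the same point as a much slower engine that pays off
# on almost every cycle; both sit on the same VEFF diagonal.
slow_but_deep = veff(100, 1e-2)             # 1.0 per second
print(fast_but_shallow, slow_but_deep)
```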

Drafting the Right Technologies for our Winning Team

While it might be hard to picture bugs equating to points in football, to win, our verification team must pull all of the bugs it can out of a project, just as the football team must pull all of the points it can out of the opposing team. In football we use multiple specialized resources – receivers, quarterbacks, defensive backs, special teams, etc. – to gain points for the football team. Each one has a certain individual point-gaining efficiency, and the coach adjusts the game plan to maximize those resources. In verification we rely on specialized technologies, and the verification lead must understand the VEFF of each one and adjust the verification plan to maximize the overall bug finding of the team.

Core Technologies

Just as a football team builds itself around a core capability, such as a running game, the verification team relies on a core set of technologies. Traditionally, those core technologies included gate simulation, RTL simulation, and a waveform database. However, with the growth in randomized testbenches, a constraint solver is now part of that core. Regardless of whether the verification team relies on this core alone or adds additional technology around it, the performance of the core is critical to the team’s success. And just as a running back who is a few tenths of a second faster can tip a common set of plays into a winning offense, improvements to these core technologies typically fit known methodologies and enhance the ability to reach verification closure. However, if the verification team needs a break-out technology to complete the project, we will need to look for technologies with significantly higher VEFF.

Gate simulation is the original digital verification technology, and most teams still use it for a final check because it clearly represents the functionality and timing of the project. With circuits now reaching a billion gates, however, the VEFF of gate simulation is quite low. Where gate simulation is part of the hand-off process, performance and capacity improvements can reduce the time to close verification.

Also in the core are RTL simulation and waveform databases. This pair is used in every project and nearly every simulation run, and improvements to both have been ongoing for more than two decades. Most of these improvements have come as the RTL engine optimizes simulation based on coding styles it recognizes and as the waveform database itself is optimized. The challenge is that the code to execute has grown faster than the performance gains. In response, most verification teams have broken verification into fast-running tests that can be run in parallel on a verification (compute) farm. As these verification environments have grown, new languages such as e (IEEE 1647) and SystemVerilog (IEEE 1800) have emerged to structure the environment with software-like code (as opposed to the hardware-like code of the HDLs). In addition, simulation of power management is growing quickly as project teams wrestle with low-power requirements. This translates into the need to simulate these structures natively in the RTL engine so that every test in regression can also be a low-power test. As with design code before, the venerable RTL engine will learn to recognize the new coding styles, and it will continue to increase in performance to serve all projects.

Looking at both RTL simulation and debug suggests performance needs throughout the compile-elaborate-simulate-debug cycle. We have described the performance needs of RTL simulation and the waveform database, but VEFF in regression increasingly depends on how fast we can find and repair bugs. In an era where many projects depend on executing a massive number of short, parallel regression tests against huge designs, the runtime is often only a fraction of the debug loop.

As software-like testbenches increase in usage, so does the constraint solver. This newer engine computes the random stimulus applied through verification methodologies such as the Open Verification Methodology (OVM). Depending on the complexity of the constraints, this engine can become the dominant factor in cycle speed. Verification teams need to add knowledge of how to build efficient constraints even as the verification tool suppliers increase engine speed with new algorithms and improved diagnostics. For verification teams shifting from directed to randomized environments, the constraint solver has become a core capability that must increase in performance for every project.
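
To make the point about constraint structure concrete, here is a small Python analogy (not SystemVerilog, and not tied to any particular solver; the names and numbers are invented for illustration). Both generators produce a 256-byte-aligned address inside a 4 KB window, but the first forces the randomizer to find legal values by trial and error, while the second describes them directly.

```python
import random

WINDOW_LO, WINDOW_HI, ALIGN = 0x1000, 0x1FFF, 0x100  # illustrative values

def naive_addr():
    # "Inefficient constraint": sample the whole 16-bit space and retry
    # until a value happens to satisfy both the range and the alignment.
    # On average this takes roughly 4,000 attempts per value.
    while True:
        addr = random.getrandbits(16)
        if WINDOW_LO <= addr <= WINDOW_HI and addr % ALIGN == 0:
            return addr

def shaped_addr():
    # "Efficient constraint": enumerate only the legal aligned values,
    # so every draw is already a solution.
    return random.randrange(WINDOW_LO, WINDOW_HI + 1, ALIGN)

print(hex(naive_addr()), hex(shaped_addr()))
```

The same reshaping applies, in spirit, to constraints written for a commercial solver: the more directly the legal space is described, the fewer cycles the solver spends per transaction.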

Enhanced Verification Capabilities

As we observed, even significant performance improvements in the core technologies will not increase their VEFF significantly. Just as football teams turn to “special teams” – groups of players with unique skills – verification teams must turn to newer technologies for higher VEFF. These new technologies must be employed because the solution space for verification is growing exponentially even as the design space is growing geometrically. Core engine improvement is necessary, but not sufficient, to manage the verification challenge.

The most broadly applicable new capability is metric-driven verification (MDV). MDV is a combination of technology and methodology that enables the verification team to measure progress against a plan using metrics extracted from both software- and hardware-based verification engines. For example, the verification team may plan to apply formal analysis to the bus, an OVM-based simulation environment to the peripherals, and emulation at the system level. Each of these generates coverage data, which can be aggregated to measure how effectively and efficiently the overall verification is executing against the plan. Effective verification measures the progress against the verification goals. Efficient verification measures the impact of each individual test on the verification plan. As a result, MDV executed from a verification plan can dramatically increase cycle efficiency by eliminating redundant tests. Since it automates planning and binds together verification tools already in use by verification teams, it is the new capability with the biggest potential benefit.
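
A toy Python sketch of the aggregation idea follows; the bin names, run names, and data structures are hypothetical and not taken from any particular tool.

```python
# Coverage bins called out in the verification plan (hypothetical names).
plan_bins = {"bus.burst_lengths", "bus.error_response",
             "uart.baud_rates", "soc.boot_sequence"}

# Coverage reported back by each engine or run.
runs = {
    "formal_bus_props":   {"bus.burst_lengths", "bus.error_response"},
    "ovm_uart_random_01": {"uart.baud_rates"},
    "ovm_uart_random_02": {"uart.baud_rates"},   # contributes nothing new
    "emulation_boot":     {"soc.boot_sequence"},
}

# Effectiveness: overall progress against the plan.
covered = set().union(*runs.values()) & plan_bins
print(f"plan coverage: {len(covered)}/{len(plan_bins)} bins")

# Efficiency: what each individual run adds; redundant runs show zero,
# which is how MDV flags tests that can be dropped from regression.
seen = set()
for name, bins in runs.items():
    new = (bins & plan_bins) - seen
    print(f"{name}: {len(new)} new bins")
    seen |= new
```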

VEFF is a great way to illustrate the impact of MDV. Let’s assume our verification team has access to a small verification farm of 20 CPUs configured as 10 dual-core multi-processor systems. The team has a verification plan that decomposes to 100K coverage bins needed to cover the functionality of our project and will rely on running a set of constrained-random tests. To make the math easy, we’ll assume that we have written 20 tests, each of which uses a different random seed and achieves an average coverage rate of 1 bin/sec for the 600 seconds it takes the core RTL simulator to run the design at a rate of 100 system clock cycles/sec. Given these numbers, the VEFF for each MDV regression is 600 sec/test × 1 cover bin/sec × 20 tests/regression = 12K cover bins/regression.
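
The same arithmetic as a small Python sketch; the numbers are from the example above, while the variable names are mine.

```python
# Baseline VEFF from the example: 20 single-core tests, one per CPU,
# each running for 600 s and closing 1 cover bin per second.
tests_per_regression = 20
test_runtime_s       = 600
coverage_rate_bps    = 1.0   # cover bins per second per test

veff_baseline = tests_per_regression * test_runtime_s * coverage_rate_bps
print(veff_baseline)         # 12000 cover bins per regression
```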

With this baseline, MDV provides the critical ability to integrate coverage against the plan so we know whether the simulations ran long enough to pull out the bugs. While merely getting a faster simulator would not by itself guarantee that we find more bugs (because without MDV we would not know the integrated coverage of the tests), it is clear that a faster simulator would directly improve the VEFF number. If we doubled simulation speed, VEFF would double. The trouble is that simulator improvements typically arrive at 10% or 20% per year. What if we need something bigger, like 40%?

One technology that promises a 40% improvement is multi-threading using multiple compute cores. For sure, if we are running long directed tests instead of MDV on a verification farm, this technology would directly benefit VEFF. However, most projects now use verification farms to address the huge verification space.

How can we apply VEFF to determine the most efficient simulation technology for our simple example? Recall that we built our verification farm from dual-core machines. We still need to fill all of the coverage bins and we still have the same number of tests; it’s just that we can now run each test faster because of multi-threaded simulation. In our calculation, that means each test would average 1.4 cover bins/sec (a 40% increase), but only half of the regression can run on the farm at once because each test now needs 2 cores, so the farm must be filled twice. Updating our calculation, the VEFF for MDV with multi-threaded simulation is (600 sec/test × 1.4 cover bins/sec × 20 tests/regression) / 2 farm passes = 8.4K cover bins/regression.

Interestingly, the multi-core VEFF is only 70% of that of nominal single-core simulation, even though each individual test runs faster. In fact, if we extracted just a 10% gain from conventional simulation, it would translate directly into 110% of the baseline VEFF (13.2K cover bins/regression). Moreover, if we replaced the simulator with an accelerator operating 20x faster, VEFF would grow twentyfold. Of course, there are other considerations regarding acceleration, but the key to understanding how core and specialized verification technologies interact is to understand the VEFF of the combination.
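
The comparison in one place, as a Python sketch: the numbers come from the article's example, while the helper and its parameters are mine, with farm_passes simply modeling how many times the 20-CPU farm must be cycled per regression.

```python
def veff(coverage_rate_bps, farm_passes=1, tests=20, runtime_s=600):
    # Cover bins closed per regression for the 20-test example above.
    return tests * runtime_s * coverage_rate_bps / farm_passes

baseline      = veff(1.0)                  # 12,000 bins/regression
multithreaded = veff(1.4, farm_passes=2)   #  8,400 (2 cores per test)
plus_10_pct   = veff(1.1)                  # 13,200 (10% faster simulator)
accelerated   = veff(20.0)                 # 240,000 (20x accelerator)

print(multithreaded / baseline)  # 0.7  -> 70% of the single-core VEFF
print(plus_10_pct / baseline)    # 1.1  -> engine gains map 1:1 to VEFF
print(accelerated / baseline)    # 20.0 -> so does a 20x accelerator
```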

Straddling the edge of the broadly applicable solution space of figure 1 is formal (property) analysis. Formal analysis uses assertions – SVA, PSL, OVL – to determine whether the DUT satisfies specified properties. Modern formal analysis tools leverage the simulation environment where the DUT and assertions are typically first coded, simplifying the methodology and making formal analysis available to many more engineers. While it can eliminate the need to build a testbench in certain situations, it does have some usage constraints. Formal analysis is more limited by DUT size than other verification capabilities and works best in applications like control logic, data transport, and bus interface/protocol compliance, but not as well for data transformation and system verification. In summary, formal analysis significantly improves VEFF by eliminating traditional simulation cycles, but its value is best realized when it is part of the verification plan so that the coverage it generates can contribute to closing the plan.

Figure 1 also identifies several technologies on the high cycle-speed side of the graph. The most interesting of these is the “hot-swap” acceleration technology. Traditional transaction-based acceleration, also shown on the graph, uses a hardware-based accelerator to run the DUT orders of magnitude faster than simulation. Prior to “hot-swap,” verification engineers had to choose the engine for a given verification run – software-based or hardware-based. If an error that needed detailed debug was found millions of cycles into a hardware-based run, the engineer would have to rerun the test completely on the software-based engine. Depending upon the length of the test, that extra run could add a day or more to the verification project while the rest of the accelerated tests wait for the DUT changes that fix the bug. With hot-swap, the verification engineer can run the DUT on the accelerator up to the point of the error and then move the run into the software simulator to continue through the error situation. This fusion of hardware-based and software-based execution puts the capability in the middle of the generally applicable solution zone, though there are some methodology considerations that affect how the verification environment is connected to the DUT. As we illustrated earlier, the impact of acceleration can directly improve VEFF.

In the region of greatest cycle-speed gains are transaction-based verification and emulation. These technologies do require significant methodology investments – SystemC-based transaction-level modeling (TLM) for the former and emulation hardware usage for the latter. However, the cycle-speed gains are in the 1000x to 1,000,000x range, though the translation to VEFF isn’t direct because of the methodology changes. These technologies are typically applied in SoC verification, where embedded software is a key component of the overall functionality. Given the level of specialized knowledge required, both transaction-based verification and emulation fall outside the area of generally applicable solutions even though they are critical to the success of some verification plans.

Summary

Verification projects, like football games, are won when the team operates at peak performance. In the verification space, we measure bugs found instead of points scored, but we do use a combination of specialized players and equipment to achieve our wins. For sure we always want the greatest performance in each of the specialized parts of our team, but we need to measure how they work together to generate verification efficiency. The greater our verification efficiency, the better we can maximize productivity, predictability, and quality on our way to winning every verification “game” we play.

Charlie Dawson is Senior Engineering Manager at Cadence Design Systems. He has a B.A. degree in Computer Science and Political Science from Boston College. Recently he has been managing a team primarily focused on Verilog performance, VPI, and low power capabilities.
