Maximizing Verification Productivity: eVC Reuse Methodology (eRM)

by Andrey Shvartz, Verisity Design, Paris, France

Abstract
Verification managers engaged in complex IC development projects are under constant pressure to reduce the time and cost of verification, yet they lack the necessary resources. This manuscript articulates how reusable verification components, and an underlying verification reuse methodology are essential elements to verification productivity gains. The issues that must be dealt with are delineated and the requirements of a complete verification reuse methodology are outlined. Verisity’s e Reuse Methodology (eRM™) is used to exemplify many of the key concepts.

Introduction
Verification has emerged as the single biggest bottleneck in complex ASIC, ASSP, and SoC design projects [1]. Recent studies such as those from Collett International have consistently reported that verification consumes 60-70 percent of project effort for complex IC designs. SoCs in particular present a massive verification challenge. Not only do they include multiple functional blocks, they often have multiple operating modes. Achieving satisfactory verification requires that the cross product of all functional block interactions and all operating modes be tested. Clearly, verification productivity must be improved. Along with other Electronic Design Automation vendors, Verisity has stepped up its efforts to bring new verification solutions. Verisity's focus on verification has yielded significant advances in verification methodology. Verisity's Specman Elite functional verification solution provides powerful engines to generate stimuli and manage functional coverage. Tightly linked with Specman Elite is the e verification language, in which verification environments are created. To increase verification leverage, Verisity pioneered the concept of reusable verification components [2]. Written in e, they are known as eVCs™ (e Verification Components).

e Verification Components (eVCs)
An eVC™ is an e Verification Component. It is a ready-to-use, configurable verification environment, typically focusing on a specific protocol or architecture (such as Ethernet, AHB, Bluetooth, or USB).

eVCs are reusable, pre-verified, configurable, plug-and-play verification components. “Using the eVC shortened a one-year development project by 6 weeks (i.e., over 10%). Even if we’d had those 6 weeks, without Specman Elite, we still would not have covered all possible behaviors,” said Laurent Savelli, Verification Manager, STMicroelectronics, France. Because eVCs are pre-verified, they reduce verification risk and improve design quality. And since eVCs are designed as general-purpose components, they can identify corner cases, including cases the hardware designer might not have considered.

To better understand what makes a verification component reusable or not, let’s first examine verification components in more detail. A complete verification component handles all the facets involved in verifying a given protocol, interface or processor within the device under test (DUT). This minimally includes the following items (see Figure 1):

  • Input traffic generator to create stimulus for the DUT (e.g. packets/frames, bus transactions, etc.)
  • Bus functional models (BFMs) to drive that traffic, communicating directly with the DUT
  • Monitors, scoreboards, and protocol checkers to examine the actual response of the DUT relative to the expected response
  • Functional coverage to measure and report on whether the transactions and scenarios defined in the test plan have been covered or not


Figure 1: Typical verification component.
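The four pieces listed above can be sketched in a language-neutral way. The following is an illustrative Python model, not e code; the class and method names are hypothetical, chosen only to mirror the roles in Figure 1 (generator, BFM, monitor/scoreboard, functional coverage).

```python
# Conceptual sketch of a verification component's four parts (names are
# illustrative, not from any real eVC).
import random

class VerificationComponent:
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.sent = []         # items driven into the DUT by the BFM
        self.observed = []     # responses captured by the monitor
        self.coverage = set()  # functional-coverage buckets that were hit

    def generate(self):
        """Input traffic generator: create a randomized transaction."""
        return {"addr": self.rng.randrange(16), "data": self.rng.randrange(256)}

    def drive(self, dut, item):
        """Bus functional model: drive the item into the DUT; the monitor
        records the DUT's response."""
        self.sent.append(item)
        self.observed.append(dut(item))

    def check(self, expected):
        """Scoreboard/protocol checker: compare actual responses against a
        reference model."""
        return all(o == expected(s) for s, o in zip(self.sent, self.observed))

    def sample_coverage(self, item):
        """Functional coverage: record which address bucket was exercised."""
        self.coverage.add(item["addr"] // 4)

# Usage against a trivial stand-in "DUT" that returns data incremented by one.
vc = VerificationComponent(seed=1)
for _ in range(8):
    item = vc.generate()
    vc.drive(lambda i: i["data"] + 1, item)
    vc.sample_coverage(item)
assert vc.check(lambda s: s["data"] + 1)
```

The point of the sketch is the division of labor: generation, driving, checking, and coverage are separate concerns inside one encapsulated component.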

You can apply the eVC to your device under test (DUT) to verify your implementation of the eVC’s protocol or architecture. eVCs expedite creation of a more efficient testbench for your DUT. They work with both Verilog and VHDL devices and with all HDL simulators supported by Specman Elite™.

For example, bus-oriented verification components such as AMBA AHB or USB include agents (masters, slaves) and bus logic (address decoder, arbiter), as well as other modules. These verification components instantiate a virtual bus environment surrounding the DUT in order to create realistic traffic scenarios, check for protocol adherence, and measure functional coverage of the scenarios on the bus.

Verification components are inherently reusable since they are encapsulated and are typically targeted at a standard specification. They can be reused when moving from module-level to chip-level to system-level verification as well as when moving from project to project.

Verification Reuse Needs an Underlying Methodology
To explain why a complete verification reuse methodology is required, let’s consider some of the common difficulties encountered when using a verification component. First, it is easy to see that with all the functionality packed inside a verification component, it must be highly configurable to allow each user to adapt it to their specific target DUT environment. In addition, components must lend themselves to being easily controlled during simulation. Some specific issues exemplifying these aspects of verification components include:

  • How many agents (and of what types) should be instantiated?
  • What kind of traffic should they generate?
  • What type of traffic should they avoid generating?
  • How should transactions be synchronized with other events?
  • How can the user specify traffic scenarios of interest?
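The configuration questions above are typically answered up front, before the environment is built. As an illustrative sketch (the object and field names here are hypothetical, not part of any eVC interface), a single configuration object can carry those answers:

```python
# Illustrative sketch: one configuration object answers the questions above
# (how many agents, of what kind, what traffic to generate or avoid).
from dataclasses import dataclass, field

@dataclass
class AgentConfig:
    kind: str            # e.g. "master" or "slave"
    active: bool = True  # active agents drive traffic; passive ones only monitor

@dataclass
class EnvConfig:
    agents: list = field(default_factory=list)  # how many agents, of what types
    legal_kinds: tuple = ("READ", "WRITE")      # traffic the agents may generate
    forbidden_kinds: tuple = ("LOCKED",)        # traffic they must avoid

    def allowed(self, kind):
        return kind in self.legal_kinds and kind not in self.forbidden_kinds

# One master that drives traffic, one passive slave that only monitors.
cfg = EnvConfig(agents=[AgentConfig("master"), AgentConfig("slave", active=False)])
assert cfg.allowed("READ") and not cfg.allowed("LOCKED")
```

Gathering every knob into one place is what makes a component adaptable to each user's DUT environment without modifying the component itself.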

Second, most systems today incorporate multiple protocols and interfaces, each of which requires its own respective verification component. When multiple components co-exist within a single testbench, several more challenges are introduced:

  • Naming and timing interference between the various verification components must be avoided
  • Different test writing approaches must be unified
  • System-level scenarios must be synchronized

Without a standard in place, each verification component developer resorts to developing his or her own home-grown methods and guidelines. Not only does this waste valuable time and make the component more difficult to use, it does not accomplish the goal of ensuring interoperability between components that are created by different developers.

By developing all the components within the structure of a single, consistent methodology, they all have the same look and feel. As a result, after mastering one such verification component, a user will find all the others familiar and easy to use, shaving substantial time from the verification schedule and improving quality. A reuse methodology maximizes the productivity of a verification component’s user as well as that of the component’s developer. By not re-inventing the wheel for the elements common to all verification components, the developer can focus on implementing the unique, value-added parts of their component.

We have identified three main requirements to maximize reusability:

  1. Avoiding interference between verification components (referred to as coexistence)
  2. Ensuring a common look and feel that allows easy configuration, control and test writing (referred to as commonality)
  3. Combining multiple components for synchronized operation (referred to as cooperation)

The reusability benefit of any given component is directly proportional to how easy it is for the user to achieve the above. The next three sections of this paper describe these verification reuse methodology requirements in more detail.

Avoiding Interference between Verification Components (Coexistence)
It is often necessary to combine multiple components from different sources within a single testbench. In these cases it is extremely important to avoid interference between components. Such interference may result from name space collisions (e.g. class/object names, type names, etc.) and/or dependency on global settings (e.g. environment variables, testbench-wide settings, etc.).

A reuse methodology must define all the necessary rules such that each verification component is encapsulated, self-contained and unique. This eliminates the potential for unanticipated interactions with other components.
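The encapsulation rule can be illustrated with a small sketch. This is not e code and the convention shown (a unique prefix per component) is only one illustrative way to guarantee uniqueness in a shared namespace:

```python
# Sketch: every component claims a unique prefix and qualifies all of its
# names with it, so two components never collide in the shared namespace.
_PREFIXES = set()  # stands in for the testbench's single shared namespace
_SETTINGS = {}

class Component:
    def __init__(self, prefix):
        if prefix in _PREFIXES:                    # collisions are caught up front
            raise ValueError(f"prefix {prefix!r} already in use")
        _PREFIXES.add(prefix)
        self.prefix = prefix

    def set(self, name, value):
        _SETTINGS[f"{self.prefix}_{name}"] = value  # every key carries the prefix

    def get(self, name):
        return _SETTINGS[f"{self.prefix}_{name}"]

eth, usb = Component("eth"), Component("usb")
eth.set("timeout", 100)  # same logical name, no collision...
usb.set("timeout", 500)  # ...because each key is qualified by its owner's prefix
assert eth.get("timeout") == 100 and usb.get("timeout") == 500
```

A duplicate prefix is rejected at construction time, which is exactly the kind of unanticipated interaction the methodology rules are meant to eliminate.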

Ensuring Common Look and Feel between Verification Components (Commonality)
When multiple verification components from different sources are used, learning to use each one is often a time-consuming process. Different packaging and directory structures, component configuration approaches, and documentation standards, as well as different approaches to generating test scenarios, all significantly increase the time required to set up a verification component in the user’s environment. Other issues, such as inconsistent mechanisms to control the runtime behavior of the component, trace (debug) message formats, and error messages (e.g. when a protocol violation is detected), increase the time spent debugging and fine-tuning the testbench. Such variations between components undoubtedly impact the component user’s productivity and, in turn, increase the support load on the component’s developer. This impedes the developer’s ability to deliver the component to additional users or to develop new components.

By defining standards for all the user-controllable aspects, a complete reuse methodology ensures that components have the same look and feel, thereby significantly reducing the learning curve for each new component. Furthermore, a single standard mechanism can be applied to control all the components consistently (e.g. turning trace messages on or off), rather than a different mechanism for each component, as individually defined by the component’s developer.
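A single shared control point for trace messages can be sketched as follows. The API below is purely illustrative (not any real tool's interface): one call changes verbosity for every component that matches a name pattern, instead of one mechanism per component:

```python
# Sketch of a single, shared verbosity control: set_verbosity() applies to
# every component matching a pattern; later settings override earlier ones.
import fnmatch

class Logger:
    levels = {"NONE": 0, "LOW": 1, "FULL": 2}

    def __init__(self):
        self.settings = []  # (component-name pattern, numeric level) pairs

    def set_verbosity(self, pattern, level):
        self.settings.append((pattern, self.levels[level]))

    def emits(self, component, level):
        allowed = self.levels["LOW"]  # default verbosity
        for pat, lvl in self.settings:
            if fnmatch.fnmatch(component, pat):
                allowed = lvl         # last matching setting wins
        return self.levels[level] <= allowed

log = Logger()
log.set_verbosity("*", "NONE")      # silence everything...
log.set_verbosity("usb_*", "FULL")  # ...except the USB component, fully traced
assert log.emits("usb_monitor", "FULL")
assert not log.emits("eth_monitor", "LOW")
```

The benefit is that the user learns one mechanism once and it applies uniformly, regardless of who developed each component.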

Facilitating Verification Components Interoperation (Cooperation)
Ensuring that multiple verification components co-exist in the same testbench is necessary, but not sufficient, to achieve verification goals. To stress the system and reach corner cases, the user needs to coordinate the generation and control of multiple sequences of operations performed by the various verification components, as well as by the rest of the testbench. Furthermore, the user needs to coordinate the simultaneous operation of multiple components on different system interfaces to drive synchronized system-level scenarios (see Figure 2). By defining a common mechanism to express such sequences of operations, and the ability to easily mix sequences from multiple components, a reuse methodology makes test writing much more powerful and straightforward for the user.


Figure 2: Synchronizing input generation between multiple verification components is a key methodology requirement.

Controlling input generation is perhaps the most important aspect of a multi-component environment, but not the only one. Another interesting example is the coordination of end-of-test. With multiple verification components simultaneously generating and driving inputs on several interfaces, a single component cannot simply stop the simulation when it has finished. A more centralized approach is required, which allows all the instantiated verification components to report that they are done, and only then stops the simulation.
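The centralized end-of-test idea can be sketched in a few lines. The names below are hypothetical; the sketch only shows the pattern the text describes (stop only once every participant has reported done):

```python
# Sketch of centralized end-of-test coordination: each component registers,
# reports when it is done, and the simulation may stop only when none remain.
class EndOfTest:
    def __init__(self):
        self.pending = set()

    def register(self, name):
        self.pending.add(name)      # component raises an "I'm still busy" flag

    def report_done(self, name):
        self.pending.discard(name)  # component drops its flag when finished

    def may_stop(self):
        return not self.pending     # stop only when no flags remain

eot = EndOfTest()
for evc in ("ahb_master", "usb_host", "eth_port"):
    eot.register(evc)
eot.report_done("ahb_master")
assert not eot.may_stop()           # two components are still driving traffic
eot.report_done("usb_host")
eot.report_done("eth_port")
assert eot.may_stop()
```

No single component decides to end the run; the decision emerges only when all of them agree, which is exactly what the paragraph above calls for.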

A complete reuse methodology defines standard mechanisms for all such tasks, to be used by every verification component. This saves the user significant time in determining how to accomplish each task with each individual verification component.

Knowledge Transfer Essential to Reuse Methodology Success
Education is an absolutely critical element in the acceptance and success of any methodology. It is essential that a methodology be accompanied by training courses, complete coding examples (“golden” verification components), and comprehensive documentation, so that everyone involved in the verification process employs the same techniques and standards. Other essential deliverables are templates for verification components’ user guides and training classes. These, too, ensure that users are presented with information in a consistent fashion.

Compliance Checking
To provide peace of mind to both the component developer and user, tools to validate a given component’s adherence to the methodology are needed. They enable developers to be certain they have complied with the rules and provide high confidence of the component’s usability and interoperability. Similarly, component users need access to the compliance testing results. They need to be able to verify for themselves that the developer has indeed lived up to the standards.

eRM: The e Reuse Methodology
Verisity Design’s e Reuse Methodology, or eRM for short, was designed to meet all the methodological requirements articulated above. Drawing on over 350 man-years of collected experience from Verisity’s customers, partners, consulting engineers and internal eVC developers, eRM delivers the best known methods for architecting, coding and packaging e-based verification components (eVCs). eVCs are verification components written in the e verification language, which was designed specifically for verification; reuse and extensibility are fundamental e language design principles. The e Reuse Methodology (eRM) codifies best practice for developing e-based verification components and verification environments (see Figure 3).


Figure 3: eRM-compliant eVCs are complete verification environments that embody the full Specman Elite methodology.

eRM fully meets the verification reuse methodological requirements, as well as the knowledge transfer and compliance checking requirements. The key methodology requirements are listed below, along with a description of how each is addressed by eRM.

Methodology Requirement
Avoid interference between verification components (coexistence).

eRM Functionality
eRM defines naming convention rules, independence from timing settings and global settings, and much more.

Methodology Requirement
Ensure common look and feel to allow easy configuration and control (commonality).

eRM Functionality
eRM defines eVC architecture and user interface standards including:

  • Common directory structure
  • Common packaging and installation mechanisms
  • Common initialization, RTL connections, tracing/debugging and error reporting, and documentation standards.


Methodology Requirement
Synchronize multiple components operations (cooperation).

eRM Functionality
eRM defines several programming interfaces. In particular, it defines a sequence construct, allowing an eVC user to easily control sequences of transactions generated by the eVC. Each eVC developer can provide a library of sequences with his eVC. Each sequence type in the library can generate multiple transactions, with some predefined structure making that sequence interesting. For example, a developer of an eVC that generates bursts of bus transactions can define a sequence that generates a stream consisting only of long bursts all of which have a common attribute such as a given address range. A user can then customize library sequences by controlling parameters built into them. Furthermore, the user can construct more intricate sequences by combining several basic ones from one or more eVCs (see example in Figure 4). This enables more interesting scenarios to be exercised. For example, traffic on multiple interfaces of the DUT can be synchronized with each sequence originating from a different eVC.

eRM also extends Verisity’s functional verification tool, Specman Elite™, in several ways. One of the most important additions is a new construct called Sequences. Sequences enable eRM compliant eVCs to generate and synchronize complex multi-transaction scenarios. Rather than generate each item automatically, test developers can now easily generate scenarios of multiple transactions and control them over time (see Figure 4).


Figure 4: Example of a multi-eVC input scenario constructed using eRM’s Sequence feature.
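The sequence idea can be sketched outside of e. The real construct is the e `sequence`; the Python below is only a conceptual model (function names are hypothetical) of a sequence library plus a compound sequence that mixes streams from two eVCs, in the spirit of Figure 4:

```python
# Conceptual sketch of sequence composition: a sequence yields transactions,
# and a compound sequence interleaves basic sequences from several eVCs.
def long_bursts(addr_base, count):
    """A library sequence: only long bursts, all within one address range."""
    for i in range(count):
        yield {"evc": "ahb", "kind": "BURST16", "addr": addr_base + 16 * i}

def usb_frames(count):
    """A library sequence from a second eVC: a stream of USB frames."""
    for i in range(count):
        yield {"evc": "usb", "kind": "FRAME", "frame": i}

def interleave(*seqs):
    """A compound sequence: round-robin items from several basic sequences."""
    seqs = [iter(s) for s in seqs]
    while seqs:
        for s in list(seqs):
            item = next(s, None)
            if item is None:
                seqs.remove(s)  # this basic sequence is exhausted
            else:
                yield item

# Synchronized traffic on two DUT interfaces, each stream from a different eVC.
scenario = list(interleave(long_bursts(0x1000, 2), usb_frames(2)))
assert [t["evc"] for t in scenario] == ["ahb", "usb", "ahb", "usb"]
```

The user customizes library sequences through their parameters (here, the address range and count) and builds richer scenarios by combining them, without touching the eVC internals.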

To achieve the highest degree of knowledge transfer possible, Verisity provides customers and eVC developers with an eRM training course, three “golden eVCs” (ideal coding examples), and extensive documentation. Verisity also provides developers with templates for eVC user guides and eVC training classes. These templates ensure that eVC users are presented with information about all eVCs in a consistent fashion, independent of the eVC’s developer.

To ensure that eVCs are truly eRM compliant Verisity provides an eRM checklist. The checklist is organized in topical sections so it can be easily parceled out to multiple developers. Each check is defined as required or optional and a report format is provided so users or customers can consistently view and compare eVCs. The checklist results are a required item to be delivered with the eVC to be eRM compliant, and Verisity also posts the reports on the eVC Store web site.

Summary
Verisity, along with other electronic design automation vendors, has stepped up its efforts to bring new verification solutions to increase verification productivity. “Verification reuse is extremely important to ST, particularly for the ST Bus since it is a standard protocol used throughout ST. With Specman Elite we created a plug-and-play verification component -- the ST Bus eVC -- which we have already delivered to several projects, enabling them to verify conformance to this standard,” said Jean-Marc Chateau, Director of the Design, Consumer and Microcontroller Groups for ST. Verisity’s exclusive focus on verification has already yielded significant advances. For example, over the past two years Verisity pioneered the concept of reusable verification components (eVCs) and has invested heavily in a verification reuse methodology (eRM). Over that same period the number of verification components available has grown dramatically; as of August 2002, over 70 eVCs had been created.

This rapid growth in the number of eVCs is testament to the value of verification components and their acceptance, but it has also brought about the need for verification reuse standards. A verification reuse methodology maximizes reusability and interoperability and ensures a consistent user experience. This manuscript has described the nature of verification components and the required characteristics of a verification reuse methodology. It has also provided a brief introduction to Verisity’s eRM, the e Reuse Methodology, and explained how eRM fully meets the criteria of a verification reuse methodology.

References

[1] D. Geist, G. Biran, T. Arons, M. Slavkin, Y. Nustov, M. Farkas, K. Holtz, A. Long, D. King, S. Barret, “A Methodology for the Verification of a ‘System on Chip’”, DAC, 1999.

[2] “Functional Verification Automation for IP, Bridging the Gap Between IP Developers and IP Integrators”, http://www.verisity.com/html/technical_papers.htm
