Advanced Techniques for Building Robust Testbenches with DesignWare Verification IP and Reference Verification Methodology (RVM)

Charles Li, Corporate Applications
Ashesh Doshi, Product Marketing
Synopsys

Introduction

Today’s consumers have come to expect more functionality from their electronic devices, provided in a smaller form factor. This has fueled the demand for integrated devices like smart phones, multi-purpose set-top boxes and portable gaming devices. To meet this challenge, designers of System-on-Chip (SoC) devices are progressively moving to smaller process geometries while integrating more features on the SoC. It is now common to see over 20 million gates in an SoC. These devices are typically powered by a large core processor from ARM or MIPS and utilize a tiered bus architecture standard such as AMBA™ for on-chip data processing and PCI Express® for off-chip data processing. These complex SoCs come with sizable on-chip memory and can interact with over a half dozen standard bus interfaces.

To build such an SoC and still meet time-to-market demands, designers are using off-the-shelf Intellectual Property (IP) for many of these standard bus interfaces. The most common bus interfaces include PCI Express, USB, Ethernet, Serial ATA and AMBA. The net effect is that verification, not design, is becoming the biggest challenge to a timely delivery of an SoC.

Synopsys® DesignWare® Verification IP (VIP) can help address these time-to-market challenges by quickly, thoroughly, and systematically verifying the bus interfaces used by the SoC at the block and chip level. The availability of this library of Verification IP from a single vendor significantly reduces the overall verification time. Verification time can be reduced further by combining DesignWare VIP with an established, robust verification methodology developed to address today’s complex verification needs.

The Value of an Advanced Verification Methodology

Traditionally, designers have used directed testing to meet their verification objectives, but this approach is running out of steam: it simply takes too long to adequately verify all of the possible scenarios that a typical SoC presents. Designers are therefore turning to advanced verification methodology standards built around techniques such as constrained random verification and functional coverage. The Synopsys Reference Verification Methodology (RVM) is one such industry-proven approach.

This paper discusses advanced verification techniques using DesignWare VIP and RVM. It is second in a series and builds on the first paper: “Five Vital Steps to a Robust Testbench with DesignWare Verification IP and Reference Verification Methodology (RVM)”. Readers unfamiliar with constrained random verification are encouraged to read the first paper.

In this second paper, we briefly discuss the benefit of using constrained random verification and offer a recap of the first paper. The primary focus and majority of the discussion in this paper is on using advanced techniques with DesignWare VIP and RVM for building a robust constrained random testbench. The techniques that will be discussed are:

  • Constraints
  • Factories
  • Callbacks
  • Coverage
  • Scenario generation

Benefits of Constrained Random Verification

Design teams have traditionally used directed testing to verify their designs. Directed testing is effective in covering known test points. Each test point is a specific functional condition identified in the test plan. Individual tests can then be written to ensure a given test point has been tested. This method is effective when the state space is small and well understood.

Typically, with a complex SoC, the state space is huge. Many I/O and embedded blocks interact with each other extensively, as large amounts of data are transferred and processed in parallel. The permutations and combinations of these interactions create an essentially infinite number of test points for the design team to consider. Exercising an adequate subset of these test points with directed tests is proving to be nearly impossible, even without considering time-to-market pressures. Some SoC designers are resorting to costly prototyping tests or, in the face of the risks and possible costs, delivering SoCs to market that have not been thoroughly verified and tested.

Many design teams are finding that writing a purely random testbench can help provide better test coverage of their design. Many test points can be exercised quickly with this approach. However, randomly generated stimulus can be unfocused, unrealistic and sometimes plain nonsense in the context of the protocol at hand. Also, it is unlikely that purely random tests will verify interactions between the various complex blocks where most of the costly design errors can be hiding. Pure random testing, therefore, achieves broad but shallow coverage of the state space.

Constrained random verification combines the best of directed and random testing. With this method, constraints are written to focus random stimulus generation on specific areas of functionality, such as the interactions between complex blocks. The benefits of constrained random verification are:

  • Improved Coverage: Specifying constraints allows designers to narrow the scope and focus random testing in areas that require coverage. This results in more meaningful tests that go deeper into the state space and offer better functional coverage.
  • Automatic creation of complex tests: Constraints serve as a partial specification of tests. This gives the design team’s testbench automation tool the freedom to create valid tests that the designer may not have conceived because of their complexity. Using constraints as a guide, the testbench automation tools can generate input stimuli that capture complex test patterns that are time-consuming to create manually.
  • Testbench reuse: Constrained random verification increases reuse by separating test-specific code from reusable infrastructure code. Typically, unit configuration and constraints will vary between testbenches. The testbench infrastructure, however, remains the same from test to test. The code that specifies each test is both minimized and localized. New tests are developed by simply changing constraints, saving valuable verification time.

Review of the “Five Vital Steps”

The whitepaper "Five Vital Steps to a Robust Testbench with DesignWare Verification IP and Reference Verification Methodology", provides an introduction to the use of DesignWare VIP with RVM. It introduces major RVM principles and explains the fundamentals of how to build a constrained random testbench infrastructure. It also shows how to create a constrained random testbench that exercises a wide range of test conditions while remaining compact and reusable.

To summarize, the five vital steps presented are:

  1. Create a test environment
  2. Configure the models
  3. Connect and use the channel interfaces
  4. Generate constrained random stimulus
  5. Control the test

These steps yield a basic constrained random testbench.

The Next Steps

As with most new technologies, DesignWare VIP used with RVM presents new concepts and techniques. The key to getting maximum benefit from something new is understanding how to apply the new ideas to the challenges at hand, and so it is with DesignWare VIP and RVM. An understanding of the methodology allows the user to efficiently apply it to specific verification requirements. Furthermore, RVM is architected for maximum flexibility, so the techniques can be modified, combined, and customized in numerous ways.

This paper picks up where the basic constrained random testbench left off and presents further techniques that can be applied with DesignWare VIP and RVM to build highly effective testbenches. The topics represent a layer of methodology that most testbenches will use at some point. Building on the first paper, five additional major topics are discussed here in more detail, four of which pair naturally. The topics are:

  • Constraints and Factories: These two topics relate to the use of data objects to represent protocol activity. The paper shows how to define customized objects and how to constrain them to suit the needs of the testbench. Constraints are a primary way of defining test conditions, replacing the manually written test conditions of a directed test environment. They are a key element of the technique in Step 4 of the Five Vital Steps paper.
  • Coverage and Callback Methods: Functional coverage is a complementary technology to constrained random verification. Coverage provides metrics that show what test conditions have been exercised by a testbench or test suite. Since stimulus generation is random, it is important to be able to tell which conditions were created and which were not. This paper will describe how DesignWare VIP supports functional coverage and how a user-defined coverage model can be created. Callback methods are a transactor mechanism that provides user access to data objects at various points in the normal 'flow' of the object. Callbacks are the primary means of gaining access to transaction objects for both scoreboarding and functional coverage, so they are described together with functional coverage.
  • Scenario generation: Efficient generation of stimulus is a key benefit of constrained random verification. But if the stimulus does not represent the conditions to be tested, then its value is diminished. Scenario generation ensures that the constrained random tests faithfully replicate operating requirements.

It is fairly easy to generate a stream of discrete transaction objects one at a time, and for most protocols this is sufficient to test cases where transactions are independent of each other. Often, however, protocols define sequences of activity in which the individual transactions are related in some way. It is important, therefore, to be able to generate sequences of transactions and to define relationships between the objects. In combination with the data objects defined by DesignWare VIP, the scenario generator provided with RVM does just that. Further, constraints allow the sequences themselves to be randomly generated and can even produce sequences of sequences (the possibilities are endless). Details about this subject are provided later in this paper.

Factories

In an object-oriented approach to verification, protocol activity is represented by objects whose attributes (members) reflect the characteristics of the activity. For example, an object that represents a PCIe TLP transaction would have members to define the requester ID, payload size, routing, and so on. DesignWare VIP defines and provides classes for these protocol transactions. The definitions of these transaction objects and the channels that handle them form the interface between testbench and VIP. To create traffic, one simply generates an object of the appropriate type and calls the put_t() task of the corresponding input channel, as the sketch below illustrates.
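Here is a minimal sketch of that flow. The channel class name is an assumption following the <class_name>_channel convention noted later in this paper, and construction and connection of the channel to the model are omitted:

dw_vip_pcie_tlp_transaction tr;
dw_vip_pcie_tlp_transaction_channel in_chan; // assumed name; connected to the model elsewhere

task send_one_tlp() {
  tr = new();            // Create the transaction object
  void = tr.randomize(); // Randomize it within its constraints
  in_chan.put_t(tr);     // Hand it to the model's input channel
}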

A transaction class is a self-contained unit listing the definitions of all members and the constraints that affect them. Together, these form a template for constrained random generation because the definitions are the ‘rules’ that the constraint solver must follow. They determine what will be randomized and within what limits. As in a production line, a generator can use this template to easily pump out streams of (randomized) transaction objects. An instance of a class that is used as a template is referred to as a factory object, or simply a factory for short.

The beauty of this approach is that, other than references to the class name, the generator code does not contain anything specific to the factory object. This makes the generator generic so it can be reused in other applications. Simply provide another factory object, and the generator will produce objects with the new template. The generator architecture follows a factory pattern, using assembly line methodology for maximum efficiency. The Five Vital Steps paper shows how the rvm_env class serves as a template for a user’s implementation of the verification environment through the use of virtual methods. In a similar fashion, the underlying generator code is extended to use any factory object.

The transaction classes are defined by DesignWare VIP, and inheritance allows the user to extend the classes. Most of the time, extension is done in order to add constraints, which will be discussed next. However, a class can be extended to add members to a factory. This is useful for adding test-specific or user-defined attributes to the base class. For example, a string member can be added to tag transactions with a proprietary label to help with tracking, as shown next:

// Extend transaction class to create new factory class with user members
class user_transaction extends dw_vip_pcie_tlp_transaction {
  string user_label; // String to hold proprietary ID label

  task new(string user_label = "Unmarked") {
    super.new();
    this.user_label = user_label;
  }
}

When a transactor creates an object to be output on an activity or output channel, the allocate() method is used to ensure that the resulting object is of the extended type and not of the base type. Note that the extended members will only be initialized, since the VIP does not process the functionality of the extra members; handling any added members must be provided by the testbench. The sketch below shows the typical override.
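For illustration, here is a hypothetical override of allocate() in the user_transaction class above. The actual signature and return type of allocate() are defined by the VIP, so this sketch only assumes the usual pattern:

class user_transaction extends dw_vip_pcie_tlp_transaction {
  ... // user_label and new() as defined above

  // Assumed signature: return the extended type through a base-class handle
  virtual function dw_vip_data allocate() {
    user_transaction obj;
    obj = new(this.user_label);
    allocate = obj; // Vera functions return by assigning to the function name
  }
}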

Constraints

Constraints allow a test to set the scope for the randomization of objects so that the desired test conditions are generated. Without any constraints at all, most random objects would be nonsensical in the context of a protocol, which is not very useful for testing.

Randomization is sometimes misunderstood and seen as a process whereby the simulation engine takes control of class members away from the user. In fact, the opposite is true. Randomization is really an additional way for the user to assign class members, and there are several ways to control the process:

  • Randomization only occurs when an object's randomize() method is called and it is completely up to the test code when, or even if, this occurs.
  • Constraints form a rule set to follow when randomization is performed so the testbench has influence over the outcome by controlling constraints. In fact, direct control can be exerted by constraining a member to a single value. Constraints can also be enabled and disabled.
  • Each rand member has a rand mode that can be turned ON or OFF, giving individual control of what will be randomized.
  • A member can be assigned a value at any time. Randomization does not affect the other methods of assigning class members.

These techniques can all be employed when working with randomization, as the following sketch illustrates.
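This short sketch exercises each control, using the PCIe TLP member and constraint names shown in the next section and assuming OpenVera’s constraint_mode() and rand_mode() object methods:

dw_vip_pcie_tlp_transaction tr;

task show_randomization_controls() {
  tr = new();
  // Turn off one named constraint block for this object
  tr.constraint_mode(OFF, "reasonable_bvLength");
  // Exclude a member from randomization; it keeps its assigned value
  tr.rand_mode(OFF, "m_bvLength");
  // Direct assignment works regardless of randomization
  tr.m_bvLength = 4;
  // Randomization occurs only when (and if) the test calls randomize()
  void = tr.randomize();
}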

The data objects used by DesignWare VIP models already have constraints defined. These constraints can themselves be quite complex, but they are the key to easily generating objects that are protocol compliant. Since an initial set of constraints is provided, the user need only make incremental modifications or additions to create the desired test conditions. The next section describes what is provided and how to add user-defined constraints.

Predefined Constraints

Each data object provided by DesignWare VIP has a constraint called valid_ranges. This is the broadest constraint possible, and is intended to keep class members within ranges that will function with the particular VIP model. These ranges may not align with protocol limits, so objects generated with only this constraint may not make sense. Because the model cannot operate outside of the limits set by valid_ranges, this constraint should never be disabled.

To begin narrowing the scope of operation, each data object has one or more 'reasonable' constraints, which are also provided with the VIP. Typically, each of these relates to a single data member so there may be several within a class. The constraint names begin with "reasonable_". This set of constraints serves two purposes. First, they enforce protocol limits so that generated objects are compliant. Then, in some cases, they also restrict the range of members in order to produce 'reasonable' simulations. For example, some protocols can have huge data payloads which take a long time to simulate. The payload size may be limited to produce simulations of manageable length. This sort of limit reflects a somewhat arbitrary choice and does not exercise the entire protocol range. The user is expected to review the reasonable constraints and decide which ones to supplement and which ones to turn off for a particular test. The reasonable constraints are on by default. Below are two examples of reasonable constraints, the first from a transaction object and the second from a configuration object.

class dw_vip_pcie_tlp_transaction extends dw_vip_data
{
  ...
  // Sample reasonable_ constraint from a PCIE TLP transaction object
  // Constrain the transaction length based on the type
  constraint reasonable_bvLength {
    if (m_enType == MEM_RD_32 || m_enType == MEM_RD_LK_32 ||
        m_enType == MEM_RD_64 || m_enType == MEM_RD_LK_64 ||
        m_enType == MEM_WR_32 || m_enType == MEM_WR_64 ||
        m_enType == CPL_D || m_enType == CPL_D_LK ||
        (m_enType == MSG_D && m_enMsgCode != SET_SLOT_POWER_LIMIT))
    {
      m_bvLength >= 0;
      m_bvLength <= 1023;
    }
    else if (m_enType == IO_RD || m_enType == CFG_RD_0 || m_enType == CFG_RD_1 ||
             m_enType == IO_WR || m_enType == CFG_WR_0 || m_enType == CFG_WR_1 ||
             (m_enType == MSG_D && m_enMsgCode == SET_SLOT_POWER_LIMIT))
    {
      m_bvLength == 1;
    }
    else if (m_enType == MSG || m_enType == CPL || m_enType == CPL_LK)
    {
      m_bvLength == 0;
    }
  }
  ...
}

class dw_vip_pcie_configuration extends dw_vip_data
{
  ...
  // Sample reasonable_ constraint from a PCIE configuration object
  // Constrain the process rate to be in a set of values
  constraint reasonable_nRxTlpProcessRate
  {
    m_nRxTlpProcessRate in { DW_VIP_PCIE_IMMEDIATE, 16 : 1023 };
  }
  ...
}

User-Defined Constraints

A user can define specific constraints for a data object. Because the data objects already have constraints defined, those must be taken into account when writing one’s own to avoid conflicts. When a user constraint defines a subset of the predefined range, there is no conflict: the constraint solver satisfies both rules by choosing from the smaller set. If there is a conflict and the desired range of values is not attainable, then the reasonable constraint should be either disabled or extended (replacing the original definition). Keep in mind that some of the predefined constraints enforce protocol limits, so some of the original code may need to be retained when extending a constraint.

There are two common ways of setting constraints. The first is to extend the class and write new constraint blocks in the derived class. Constraints follow the rules of inheritance, so one can add new constraints or redefine existing ones, just as one does when extending methods. The example below shows how to extend constraints.

// Class provided by DesignWare VIP with predefined constraints
class dw_vip_usb_transaction extends dw_vip_data
{
  ...
  // Provided reasonable constraint
  constraint reasonable_nInterTransactionIdle {
    m_nInterTransactionIdle in { 0 : 160 };
  }
  ...
}

// This class is shown just to complete the hierarchy tree
class dw_vip_usb_hs_transaction extends dw_vip_usb_transaction
{
  ...
}

// User derived class defines new constraints to use by default
class user_usb_hs_transaction extends dw_vip_usb_hs_transaction
{
  ...
  // Extend (replace) original constraint with user-defined version
  // Instances of this class will use this new definition
  constraint reasonable_nInterTransactionIdle {
    m_nInterTransactionIdle in { 16 : 1024 };
  }

  // Add user-defined conditions
  constraint small_packets {
    // The device address will be controlled by a testbench configuration object
    m_bvDeviceAddr == tb_cfg.dev_addr;
    // Keep the payload small for OUT transactions (smaller than protocol limits)
    m_enKind == dw_vip_usb_transaction::OUT => m_ovPkts[1].m_nNumDataBytes <= 32;
  }
  ...
}

This method creates a new class which is then used instead of the base class. Although formally declaring a new class incurs some overhead, this is the preferred way to add constraints if the extended set of constraints will be reused. For example, the extended type might be needed in several different testbenches or one may be developing a hierarchy of data objects.

If reuse is not an issue, the randomize() with{} construct is quite handy for adding constraints when randomizing an object. The with{} construct modifies the standard randomize() method by applying any constraints contained in the with{} clause. These additions only affect the current invocation of randomize(). Compared with deriving a new class, randomize() with{} is compact and adds no overhead, but it does not support reuse or the factory pattern of generation. The generation code contains test-specific information, so it must be rewritten for each new test. This example shows how the ‘small_packets’ constraint in the previous example could be implemented using randomize() with{}:

...

dw_vip_usb_hs_transaction randomized_hs_tr;

randomized_hs_tr = new();

while (gen_cnt < tb_cfg.test_len) {
  // Generate transactions of different types aimed at the same device
  // randomize() with{} is used to apply constraints for this call of randomize()
  // Low initial overhead, but now the generator code is not generic
  status = randomized_hs_tr.randomize() with {
    // Device address is determined by the testbench config object
    m_bvDeviceAddr == tb_cfg.dev_addr;
    // Keep OUT transactions small
    m_enKind == dw_vip_usb_transaction::OUT => m_ovPkts[1].m_nNumDataBytes <= 32;
  };
  ... // rest of generator loop
}
...

DesignWare VIP uses objects with constraints for transactions, configurations, and exceptions. For more information on the general subject of constraints, refer to the OpenVera Language Reference Manual: Testbench.

Callbacks

Callbacks are an important part of the RVM and DesignWare VIP architecture which can be used for several applications. At their root, callbacks are an access mechanism. Among other uses, they enable the insertion of user-defined code and allow access to objects for scoreboarding and functional coverage. The workings of the mechanism are described next. Then we will show how callbacks are used for functional coverage.

Callbacks are implemented using callback methods. Each DesignWare VIP model includes a class that contains a set of callback methods for that model. These methods are called as part of the normal flow of procedural code. A few differences set callbacks apart from other methods:

  • Callbacks are virtual methods with no initial implementation, so they provide no functionality unless they are extended. The exception to this rule is that some of the callback methods for functional coverage already contain a default implementation of a coverage model.
  • The callback class is accessible to DesignWare VIP users so the class can be extended and user code inserted.
  • Callbacks are called within the sequential flow at places where external access would be useful. In addition, the arguments to the methods include references to relevant data objects. For example, just before a transactor puts a transaction object into an output channel is a good place to sample for functional coverage since the object reflects the activity that just happened on the pins. A callback at this point with an argument referencing the transaction object allows this exact scenario.
  • If the callbacks are not extended, there is no need to invoke the callback methods. To avoid a loss of performance, callbacks are not executed by default. In order to use them, they must be registered using the register_callback() method of the transactor.

DesignWare VIP uses callbacks in four main applications.

  • Access for functional coverage
  • Access for scoreboarding
  • Insertion of user-defined code
  • Message processing

To support these applications, there are callback methods designed for each purpose. Here is a representative list of callback methods.

class dw_vip_pcie_txrx_rvm_callbacks extends rvm_xactor_callbacks {

  // Channel callbacks allow user-defined code at channel events
  virtual task post_tlp_channel_get( ... );
  virtual task pre_tlp_channel_put( ... );
  virtual task post_dllp_channel_get( ... );
  virtual task pre_dllp_channel_put( ... );
  virtual task post_os_channel_get( ... );
  virtual task pre_os_channel_put( ... );

  // Dataflow callbacks allow user-defined code at interfaces between functional layers
  virtual task tx_tlp_trans_layer_dataflow( ... );
  virtual task tx_tlp_link_layer_dataflow( ... );
  virtual task tx_tlp_phy_layer_dataflow( ... );
  virtual task rx_tlp_phy_layer_dataflow( ... );
  virtual task rx_tlp_link_layer_dataflow( ... );
  virtual task rx_tlp_trans_layer_dataflow( ... );
  virtual task tx_dllp_link_layer_dataflow( ... );
  virtual task tx_dllp_phy_layer_dataflow( ... );
  virtual task rx_dllp_phy_layer_dataflow( ... );
  virtual task rx_dllp_link_layer_dataflow( ... );

  // Coverage callbacks allow collection at channel events
  virtual task tlp_input_channel_cov( ... );
  virtual task tlp_output_channel_cov( ... );
  virtual task dllp_input_channel_cov( ... );
  virtual task dllp_output_channel_cov( ... );
  virtual task os_input_channel_cov( ... );
  virtual task os_output_channel_cov( ... );
}

class dw_vip_transactor_rvm_callbacks extends rvm_xactor_callbacks {

  // Message callbacks allow custom message processing just before each message is issued
  virtual task pre_notify_send_msg( ... );
}

Because functional coverage has a close relationship with constrained random verification, the use of callbacks for functional coverage is described next. The other uses for callbacks are outside the scope of this paper.

Coverage

Coverage helps determine how well constrained random tests are exercising the desired test conditions. This is a critical metric since stimulus is randomly generated and the state space coverage is, intentionally, not pre-determined. To support functional coverage, callback methods are provided in three callback classes offering a choice of coverage model from full-custom to fully implemented. Following the object-oriented paradigm, these three classes are related in an inheritance hierarchy: the base class provides the most basic support, and each derived class adds more functionality.

[Figure: coverage callback class hierarchy: <model_name>_callback_class, extended by <model_name>cov_data_callback_class, extended by <model_name>cov_callback_class]
The base class (<model_name>_callback_class) provides basic access to the data objects so that the user can develop a custom coverage technique. Recalling the discussion of callback methods, the methods in the base class include an argument that is a reference to a transaction object. All coverage callback methods represent a point in the functional flow of the transaction where coverage collection is likely. The methods have no bodies so the user extends the class and fills in the methods. The code sample below shows core techniques for implementing a coverage model. The main elements are:

  • Add an event to the class to be used as a sample event
  • Trigger the event from the callback method
  • Define a coverage group using the event and the data object from the method’s argument list

// Example of a user-defined coverage model
class user_axi_monitor_rvm_callbacks extends dw_vip_axi_monitor_rvm_callbacks {

  // Add member to hold transaction object from callback
  dw_vip_axi_monitor_transaction tr;

  // Add an event to be used as a sample event
  event cover_it;

  // Define a coverage group using the sample event and the data object
  // from the callback method’s argument list (via the local object)
  coverage_group MyCovGroup {
    sample_event = sync (ALL, cover_it);
    sample tr.m_bvAddr {
      state Range0 (32'h0000_0000 : 32'h0000_ffff);
      state Range1 (32'h0001_0000 : 32'h00ff_ffff);
      state Range2 (32'h0100_0000 : 32'h01ff_ffff);
      state Range3 (32'h0200_0000 : 32'h0fff_ffff);
      state Range4 (32'h1000_0000 : 32'hffff_ffff);
    }
    sample tr.m_enXactBurst;
    sample tr.m_enXactDir;
    sample tr.m_enXactLength {
      state valid_length (dw_vip_axi_transaction::LENGTH_1,
                          dw_vip_axi_transaction::LENGTH_2,
                          dw_vip_axi_transaction::LENGTH_4,
                          dw_vip_axi_transaction::LENGTH_8,
                          dw_vip_axi_transaction::LENGTH_16);
    }

    cross Dir_Burst (tr.m_enXactDir, tr.m_enXactBurst);
    cross Dir_Length (tr.m_enXactDir, tr.m_enXactLength);
    cross Burst_Length (tr.m_enXactBurst, tr.m_enXactLength);
  }

  task new() {
    MyCovGroup = new;
  }

  // Extend coverage method to assign sample object and trigger event
  task activity_channel_cov (dw_vip_axi_monitor_rvm oRvmModel,
                             dw_vip_axi_monitor_transaction oVipXact) {
    cast_assign (this.tr, oVipXact); // Provide object for sampling
    trigger (ON, cover_it);          // Trigger the sample event from the callback
  }
}
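Extended callbacks take effect only after registration with the transactor, as noted earlier. Here is a minimal hook-up sketch; the monitor instance name and its construction are assumptions:

// Register the extended callback object with the monitor transactor so that
// activity_channel_cov() is actually invoked (callbacks do not execute by default)
user_axi_monitor_rvm_callbacks cov_cb;
dw_vip_axi_monitor_rvm axi_monitor; // assumed constructed elsewhere in the env

task connect_coverage() {
  cov_cb = new();
  axi_monitor.register_callback(cov_cb);
}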

Because there are callback methods for different purposes, there may be more than one callback at the same point in an object's functional flow. When this occurs, DesignWare VIP ensures that the coverage callback is the last one executed. This is because the other callbacks may change the transaction object, and coverage should measure what will be passed to downstream modules. In particular, the code in a non-coverage callback could cause the transaction to be dropped; the coverage callback must be aware of this because, typically, a dropped object should not be sampled.

The next level of coverage support is provided by a class (<model_name>cov_data_callback_class) derived from the base class above. This derived class implements the coverage-related methods that provide significant events and associated transaction objects to be used as sample events and samples for coverage. There is an event that corresponds with the points when each coverage callback method is invoked and the data objects come from the methods’ argument lists. So the sample object and an event marking that the object is ready to be sampled are available. With the events and objects already defined, the user extends this class to define custom coverage groups and bins.
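As an illustration only, here is a hypothetical extension of this second-level class. The class name, the predefined event name, and the sample-object name below are placeholders, since the actual predefined names are specific to each model:

// Hypothetical sketch: a coverage group built on the event and sample object
// that the cov_data class is assumed to predefine
class user_axi_cov_data extends dw_vip_axi_monitor_cov_data_rvm_callbacks {

  coverage_group DirBurstCov {
    // Predefined event marking that the sample object is ready (name assumed)
    sample_event = sync (ALL, activity_channel_cov_event);
    sample tr.m_enXactDir;   // predefined sample object (name assumed)
    sample tr.m_enXactBurst;
    cross Dir_Burst (tr.m_enXactDir, tr.m_enXactBurst);
  }

  task new() {
    super.new();
    DirBurstCov = new;
  }
}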

The third level of support for coverage is provided by a class (<model_name>cov_callback_class) derived from the partial implementation above. This class has a coverage model fully implemented including coverage groups and bins.

The choice of which callback class to use is a matter of how much (or how little) of the coverage model the user wishes to write. It is common to have a specific list of coverage groups and bins that are matched to the verification strategy. In this case, the user will likely want to write a specific coverage model. The supplied implementation offers general purpose coverage metrics with no development time. If this suits the verification objectives, it is a great option. The three coverage callback classes provide a choice of support from full custom to fully implemented.

A general description of coverage sampling, groups, and bins is outside the scope of this paper. The coverage techniques use standard OpenVera constructs. For more information please refer to the OpenVera Language Reference Manual: Testbench.

Scenario Generation

RVM provides two types of random generators: atomic and scenario. A generator must be declared to handle a specific data type (the factory instance). To define the generator class using a proper factory pattern, the rvm_atomic_gen and rvm_scenario_gen macros (provided by RVM) should be used. These macros are described in detail in the Reference Verification Methodology User Guide. These generator macros accept an argument to define the class to be used as the factory object and then create the generator code. The following is a typical use of the scenario generator macro:

// Macro to create scenario generator
// This macro will create the following classes:
// dw_vip_axi_master_transaction_scenario
// dw_vip_axi_master_transaction_scenario_gen
// dw_vip_axi_master_transaction_scenario_gen_callbacks
// dw_vip_axi_master_transaction_scenario_election
// Note: dw_vip_axi_master_transaction_channel is defined by VIP
rvm_scenario_gen (dw_vip_axi_master_transaction, "AXI Master Gen")

Atomic generation refers to randomizing one transaction object at a time to produce a sequence of unrelated transactions. Each object is unrelated to the others so each is an atomic unit. This form of generation is simple, and yet it can be very effective for some tests. For example, it is easy to create a test which exercises operations individually by generating all transaction types between all source and destination points. For other tests, however, this will not produce the conditions needed. To create tests for anything other than simple combinations or sequences requires more than atomic generation can offer. For these more advanced applications, scenario generation is available.
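As a point of comparison before turning to scenarios, this is a minimal sketch of atomic generation. It assumes that the rvm_atomic_gen macro creates classes named analogously to the scenario macro above; construction details are elided:

// Macro to create an atomic generator for independent, one-at-a-time objects
rvm_atomic_gen (dw_vip_pcie_tlp_transaction, "PCIe TLP Gen")

class tlp_env extends rvm_env {
  ...
  // Assumed generated class name, by analogy with the scenario macro
  dw_vip_pcie_tlp_transaction_atomic_gen tlp_gen;
  ...
}

task tlp_env::start_t() {
  ...
  this.tlp_gen.start_xactor(); // produce a stream of unrelated, randomized TLPs
  ...
}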

A scenario is essentially a sequence of transactions represented as an array of transaction objects. Constraints define the rules governing the sequence. When the array of transactions is randomized, an entire sequence is generated. In addition to this fundamental capability, an RVM scenario can represent a set of sequences. The code sample below shows that different kinds of scenarios can be specified and each assigned a unique ID. By writing constraints that define sequences based on the data member kind, randomizing kind is akin to choosing a sequence type. In this way, one scenario object can generate multiple sequence types. If desired, kind can be constrained with a distribution to control the probability of each type occurring.

// Number of transactions
#define SCENARIO_LENGTH 10

// Scenario class definition – one class can represent multiple physical
// scenarios through application of constraints
//////////////////////////////////////////////////////////////////////
// AHB Transaction Scenario. Consists of 5 writes followed by 5 reads
// - 0: WR Random data @ (0x00 + offset)
// - 1: WR Random data @ (0x04 + offset)
// - 2: WR Random data @ (0x08 + offset)
// - 3: WR Random data @ (0x0c + offset)
// - 4: WR Random data @ (0x10 + offset)
// - 5: RD data @ (0x00 + offset)
// - 6: RD data @ (0x04 + offset)
// - 7: RD data @ (0x08 + offset)
// - 8: RD data @ (0x0c + offset)
// - 9: RD data @ (0x10 + offset)
//////////////////////////////////////////////////////////////////////
class user_scenario extends dw_vip_ahb_master_transaction_scenario {

  integer low_addrs;  // unique id for scenario that uses low addresses
  integer high_addrs; // unique id for scenario that uses high addresses

  // Define constraints that are common to both scenario kinds
  constraint common {
    length == SCENARIO_LENGTH;
    repeated == 0;
    foreach (items, i) {
      items[i].m_enLock == VMT_BOOLEAN_FALSE;
      items[i].m_enXferSize == dw_vip_ahb_transaction::XFER_SIZE_32BIT;
      if (i <= (length/2 - 1))
        items[i].m_enXactType == dw_vip_ahb_transaction::WRITE;
      else
        items[i].m_enXactType == dw_vip_ahb_transaction::READ;
      items[i].m_enBurstType == dw_vip_ahb_transaction::SINGLE;
    }
  }

  // Use constraints to generate multiple types of sequences
  // Constrain to use either low or high address range based on kind
  constraint address_ranges {
    kind == this.low_addrs => {
      foreach (items, i) {
        items[i].m_bvAddress == ((i * 4) % 20) + 0;
      }
    }
    kind == this.high_addrs => {
      foreach (items, i) {
        items[i].m_bvAddress == ((i * 4) % 20) + 20;
      }
    }
  }

  task new() {
    super.new();
    // define_scenario returns a unique id number which can be used in
    // writing constraints. Also, kind is automatically constrained to
    // be in the set of defined scenario id's.
    this.low_addrs  = define_scenario ("Master Scenario-low addresses", SCENARIO_LENGTH);
    this.high_addrs = define_scenario ("Master Scenario-high addresses", SCENARIO_LENGTH);
  }
}

// Using a scenario generator
class ahb_env extends rvm_env {
  ...
  // Instantiate scenario and generator in the env
  dw_vip_ahb_master_transaction_scenario_gen scen_gen;
  user_scenario my_scen;
  ...
}

task ahb_env::build() {
  ...
  // Construct the objects
  scen_gen = new ( ... );
  my_scen = new();
  // Associate the derived scenario object with the generator
  scen_gen.scenario_set.push_back (my_scen);
}

task ahb_env::start_t() {
  ...
  // Start it up!
  this.scen_gen.start_xactor();
  ...
}

There are many ways that a scenario can be used. One simple example is a read-modify-write sequence, created by a scenario with two transactions. The first transaction is constrained to be a read. The second transaction is constrained to be a write to the same address as the read. The data for the write is calculated by applying the modify operation to the data returned by the read. In this simple case, the address and data will be randomly generated, but the sequence will still be a read-modify-write, as the sketch below shows.
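Here is a sketch of such a scenario using the AHB classes from the example above. Only the read/write ordering and the shared address are expressed as constraints; computing the modified write data from the read response is procedural code outside the scenario:

class rmw_scenario extends dw_vip_ahb_master_transaction_scenario {

  integer rmw; // unique id for the read-modify-write kind

  constraint read_then_write {
    length == 2;
    repeated == 0;
    items[0].m_enXactType == dw_vip_ahb_transaction::READ;
    items[1].m_enXactType == dw_vip_ahb_transaction::WRITE;
    items[1].m_bvAddress == items[0].m_bvAddress; // write back to the same (random) address
  }

  task new() {
    super.new();
    this.rmw = define_scenario ("Read-Modify-Write", 2);
  }
}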

As described above, scenarios can be used to generate specific, predefined sequences. They can also represent systemic conditions, such as a burst of traffic on an Ethernet bus, large data transfers over USB, or a 'dirty line' resulting in a high error rate. Really, there are an endless number of uses for scenarios.

In addition to handling individual scenarios, the RVM scenario generator can also operate on an array of scenarios. This enables the generation of sequences of scenarios, and since a single scenario can itself represent multiple sequences, very complex conditions can be produced. In protocol terms, the user can randomly switch between logical sequences or conditions creating traffic that is both complex and realistic. One might end up with a read-modify-write followed by a large amount of memory access, followed by a period of light traffic, followed by another read-modify-write, and so on.

As may have been deduced, the key to effective scenarios is constraints. Within a given scenario, the constraints define what the scenario represents. Between scenarios, a distribution constraint determines the likelihood that each scenario kind will occur, as the following sketch shows. Well-designed constraints will create more accurate scenarios. For more information about constraints, please refer to the OpenVera Language Reference Manual: Testbench.
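For example, the choice between the two kinds defined by the user_scenario class above can be weighted using OpenVera's dist operator; the weights here are illustrative:

// Weight the choice of scenario kind: low-address sequences are generated
// three times as often as high-address sequences
class weighted_scenario extends user_scenario {
  constraint scenario_mix {
    kind dist {
      this.low_addrs  := 3,
      this.high_addrs := 1
    };
  }
}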

Conclusion

As with most new technologies, DesignWare VIP used with RVM presents new concepts and techniques. When applied effectively, the new practices will provide the maximum benefit from a constrained random verification methodology:

  • Save verification time and effort
  • Increase test effectiveness and coverage
  • Increase reuse

The “Five Vital Steps to a Robust Testbench with DesignWare Verification IP and Reference Verification Methodology (RVM)” whitepaper provides an introduction to using DesignWare VIP with RVM. It lays a solid foundation for creating advanced testbenches.

This paper builds upon that foundation and describes additional proven techniques for creating highly effective testbenches. It presents topics that most testbenches are likely to use, along with samples of the techniques and the underlying concepts. The paper shows several ways to use DesignWare VIP with RVM technology and provides the knowledge to customize, modify, and extend the techniques to suit the needs of SoC designers.

References

1. “Five Vital Steps to a Robust Testbench with DesignWare Verification IP and Reference Verification Methodology (RVM)”

2. “OpenVera Language Reference Manual: Testbench”

3. “Reference Verification Methodology User Guide”

For more information on Synopsys DesignWare IP, visit www.synopsys.com/designware.
