Verification of IP-Core Based SoCs

Anil Deshpande, Conexant Systems Inc
Hyderabad, India

Abstract:

With rapid strides in semiconductor processing technologies, the density of transistors on the die is increasing in line with Moore's law, which in turn is increasing the complexity of the whole SoC design. With manufacturing yield and time-to-market schedules crucial for an SoC (System on Chip), it is important to select verification and analysis solutions that offer the best possible performance while minimizing iteration time and data volume. With the advent of cutting-edge applications such as set-top boxes and HDTV, there is an increasingly evident need to incorporate the whole system on a single piece of silicon, i.e., a System on Chip (SoC) built from standard IP cores. In an IP-core based SoC design, a streamlined verification and analysis flow can contribute significantly to the success of a product. This paper devises a strategy for a more streamlined approach to IP-core based SoC verification, which helps in a smooth transition from design to chip tape-out.

1. Introduction:

Hardware designs have reached a mammoth scale today, with over ten million transistors integrated on a single chip [1]. This breakthrough in technology has, in fact, reached the point where it is hard to design a complete system from scratch. Industry has already started designing SoCs from a large repertoire of Intellectual Property components, or IP cores, sold by many vendors. System-on-chip designs usually involve the integration of heterogeneous components on a standard bus [3,4,8]. These components may require different protocols or have different timing requirements. Moreover, designers often do not have complete knowledge of the implementation details of each component. For example, vendors may want to protect their IP cores by providing only interface specifications. Consequently, the validation of such designs is becoming more and more challenging. In this paper, we outline a new methodology for verifying IP-core based system-on-chip designs. It is a well-known fact that verification today constitutes about 70% to 80% of the total design effort, making it the most expensive component, in terms of cost and time, of the entire design flow; this is expected to get even worse for SoC designs.

1.1. Why IP-Core based SoC Designs are Special

Let us open by defining what an SoC is and is not. A System on Chip (SoC) is an implementation technology [6], not a market segment or application domain. SoCs come in many shapes and variants, but a typical SoC may contain the following components: a processor or processor sub-system, a processor bus, a peripheral bus, a bridge between the two buses, and many peripheral devices such as data transformation engines, data ports (e.g., UARTs, MACs) and controllers (e.g., DMA) [1]. In many ways, the verification of an SoC is similar to the verification of any ASIC: you need to stimulate it, check that it adheres to the specification, and exercise it through a wide set of scenarios.

SoC verification is special, however, and it presents some particular challenges:

Integration: The primary focus in SoC verification is on checking the integration between the various components. The underlying assumption is that each component was already checked by itself. This special focus implies a need for special techniques.

Complexity: The combined complexity of the multiple sub-systems can be huge, and there are many seemingly independent activities that need to be closely correlated. As a result, we need a way to define complicated test scenarios as well as measure how well we exercise such scenarios and corner cases.

Reuse of IP blocks: The reuse of many hardware IP blocks in a mix-and-match style suggests reuse of the verification components as well. Many companies treat their verification IP as a valuable asset (sometimes valued even more than the hardware IP). Typically, there are independent groups working on the subsystems, thus both the challenges and the possible benefits of creating reusable verification components are magnified.

HW/SW co-verification: The software or firmware running on the processor can be verified only in relation to the hardware. But even more than that, we should consider the software and hardware together as the full Device Under Test (DUT), and check for scenarios that involve the combined state of both hardware and software. Thus we need a way to capture hardware and software dependencies in the tests we write and in the coverage measurements we collect.

All the challenges above indicate the need for rigorous verification of each of the SoC components separately, and for very solid methodologies and tools for the verification of the full system. This requirement for extensive verification indicates the need for a high level of automation; otherwise, the task of verification will simply become impractical.

2. IP Cores:

By definition, IP cores are pre-designed and pre-verified complex functional blocks. According to their properties, IP cores can be classified into three types [11].

Soft-cores: Soft-cores are architectural modules which are synthesizable. They offer the highest degree of modification flexibility. On the other hand, a lot of physical design issues need to be faced before the core can be fabricated, which makes soft-cores very unpredictable in terms of performance. A synthesizable soft-core consists of a set of technology-independent HDL files, synthesis constraints, a test-bench and validation information, and adequate documentation [11].

Firm-cores: Firm-cores are delivered as a mix of RTL code and a technology-dependent netlist, and are synthesized with the rest of the ASIC logic. They are often encrypted black boxes that are integrated into the design flow in the same way as library elements. They come ready for routing analysis, do not present significant difficulties for floor-planning, placement, and routing, and have the same routability properties as soft-cores. The performance of the block is still unpredictable [11].

Hard-cores: Hard-cores are mask- and technology-dependent modules that already have physical layout information, which gives predictable performance. The key deliverable is a fully verified layout in Graphic Data System II (GDSII) format, along with a design-for-test structure and test patterns. The drawback of hard-cores is that they cannot be customized for a particular design application. Hard-cores require more model support than firm-cores, which increases development cost. On the other hand, the usage cost is lower because timing validation, test strategies, etc., have already been built into the design. Monolithic hard-cores create a jigsaw-puzzle problem for ASIC layouts: when more than one hard-core is used, ordinary place-and-route techniques cannot be applied because only an irregular, non-rectangular area is left for routing the other non-core logic [11].

3. A Typical IP-Core based SoC Design:



4. Towards SoC Design Methodology:

Every advancement in microelectronics processing technology is followed by the development of new design technology. This new design technology, a so-called linchpin technology [12,13], becomes the building block that leads design into the next generation of design methodology. The design methodology responds with an adaptation to the new design process, resulting in an incremental increase in productivity. It alters the relationship between the designers and the design by introducing a new level of abstraction.

A linchpin technology always comes along with its specific design methodologies.

In general, these design methodologies can be grouped into four groups:

  • Area-Driven Design (ADD)
  • Timing-Driven Design (TDD)
  • Block-Based Design (BBD)
  • Platform-Based Design (PBD)

4.1 Area-Driven Design:

Area-Driven Design (ADD) [11] is the most basic and simplest methodology used in creating ASIC designs. Its primary goal is to create a design that fits within a limited area budget; the designer is challenged to implement as much functionality as possible in a single piece of silicon. The ADD methodology is used to achieve small-sized ASICs. Most ADDs are created from scratch and do not offer any design reuse. The main ADD activity is logic minimization: synthesis optimization aims to produce the smallest design that can meet the intended functionality. In this methodology, no floor-planning information is used in the RTL- or gate-level analysis.

4.2 Timing-Driven Design:

Timing-Driven Design (TDD) [11] is a methodology for optimizing a design in a top-down, timing-convergent manner. It is driven by the design requirements for meeting performance or power-consumption targets. The methodology is used to achieve moderately sized complex ASIC designs; in general, the complexity of a TDD circuit is between 5K and 250K gates. It is primarily a custom logic design, offering a very slim possibility of design reuse. The TDD methodology imposes a more floorplan-centric design flow that supports incremental changes to the design. Floor-planning and timing-analysis tools can be used to determine the location of placement-sensitive areas, allowing the results to be tightly coupled into the design optimization process. TDD relies on three linchpin technologies: interactive Floor-Planning (FP) tools, Static Timing Analysis (STA) tools, and compilers that move the design to a higher level of abstraction while retaining timing predictability. FP tools give accurate estimates of delay and area early in the design process; they address the timing- and area-convergence problems which occur between synthesis and place-and-route. STA enables a designer to identify timing problems and perform timing optimizations across the entire ASIC, reducing the validation effort spent catching bugs with slower, timing-accurate gate-level simulation. Advances in compiler technology enable the designer to move the design to higher abstractions while keeping timing predictable.

4.3 Block-Based Design:

Block-Based Design (BBD) [11] is a methodology used to produce designs that are reliable, predictable, and can be implemented by top-down partitioning of the design into hierarchical blocks. It introduces the concept of creating a system by integrating blocks of pre-designed system functions into a more complex whole. The methodology is used to create medium-sized complex ASICs with complexities between 150K and 1.5M gates. BBDs are primarily created as custom logic designs. In comparison to TDD, BBD offers a better chance for reuse, although in reality very few BBDs are reusable.

4.4 Platform-Based Design:

Platform-Based Design (PBD) [11] is a methodology driven by the need to increase productivity and shorten time-to-market by making extensive use of design reuse and design hierarchy. It expands the opportunities to speed up the delivery of derivative products. PBD achieves high productivity through extensive and planned design reuse: productivity is increased by using predictable, pre-validated blocks that have standardized interfaces. The methodology focuses on better planning for design reuse and less modification of the existing functional blocks. PBD is used to design large, complex ASICs with design complexities greater than 300K gates.

The PBD methodology separates the design into two categories of activity: block authoring and block integration. Block authoring uses a methodology suited to the block type, such as TDD or BBD. Blocks are created with standardized interfaces so they can be easily integrated into multiple target designs. Block integration focuses on designing and verifying the architecture of the system and the interfaces between the blocks. PBD is built around a standardized bus architecture and increases productivity by minimizing the amount of custom interface design or modification of the blocks. Test for the design is incorporated into the standard interfaces to support each block's specific test methodology, which allows for a hierarchical, heterogeneous test architecture.

5. Trends in Traditional SoC Verification:

Test plans: Many companies apply the same techniques they used in ASIC verification to SoC verification [4,5,8]. These typically involve writing a detailed test plan with several hundred directed tests, describing all sorts of activities and scenarios the designers and architects deem important. While these test plans are important and useful, their effectiveness is limited by two main factors:

  • The complexity of the SoC is such that many important scenarios are never thought of.
  • As the complexity of systems grows, it becomes harder to write directed tests that reach the goals.

Test generators: Each directed test hits a required scenario only once, yet there is a real need to exercise those scenarios vigorously in many different combinations [13]. This indicates the need for ways to describe generic tests that can exhaustively exercise areas of interest. Many companies write random tests, but those are usually used only at the end of the verification cycle. While these tests can "spray wildly" and reach unexpected corner cases, they still tend to miss a lot of bugs, because they spray blindly in all directions instead of being aimed at the scenarios of interest.
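
To make the contrast with purely blind random testing concrete, the short Python sketch below (Python is used here only for illustration; the descriptor fields, address ranges and the DMA scenario itself are hypothetical, not taken from any specific design) shows how a single generic test description can repeatedly target one area of interest while still randomizing the surrounding parameters:

    import random

    # Illustrative sketch of constrained-random test generation: the scenario
    # of interest (a peripheral-to-memory DMA transfer) is fixed, while the
    # remaining parameters are randomized across interesting values.

    class DmaDescriptor:
        """A simplified DMA transfer descriptor (fields are illustrative only)."""
        def __init__(self, src, dst, length, burst):
            self.src, self.dst, self.length, self.burst = src, dst, length, burst

    def gen_peripheral_to_memory_transfer(rng):
        """Constrain the scenario, randomize everything else."""
        return DmaDescriptor(
            src=rng.choice([0x40000000, 0x40001000]),      # hypothetical peripheral FIFOs
            dst=rng.randrange(0x80000000, 0x80100000, 4),  # any word-aligned RAM address
            length=rng.choice([1, 4, 16, 255, 256]),       # include boundary lengths
            burst=rng.choice([1, 4, 8, 16]),
        )

    rng = random.Random(2024)
    tests = [gen_peripheral_to_memory_transfer(rng) for _ in range(1000)]
    print(len({(t.length, t.burst) for t in tests}), "distinct length/burst combinations hit")

One generator description like this yields many different tests of the same scenario, which is exactly what a handful of hand-written directed tests cannot provide.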

Checking integration: Many SoC verification test benches have no special means of verifying correct integration. Instead, the system is exercised as a whole, as well as possible, under the assumption that any failure can be detected by some false side effect it will create (e.g., one of the data packets passing through a switch will be corrupted) [13]. The main drawback of this approach is that finding the source of a problem by tracing the corrupted data all the way back to where it originated consumes too much time. This points out the need for integration monitors that can identify integration problems at the source.
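
As an illustration of what such an integration monitor might look like, the following Python sketch checks one assumed protocol rule on a simple request/acknowledge handshake between two blocks; the signal names, the single rule and the time-out value are assumptions made only for this example:

    # Minimal sketch of an integration monitor for a request/acknowledge handshake.
    # It flags a violation at the interface where it happens, rather than waiting
    # for corrupted data to show up somewhere downstream.

    class HandshakeMonitor:
        def __init__(self, max_wait_cycles=16):
            self.max_wait = max_wait_cycles
            self.cycles_waiting = 0
            self.errors = []

        def sample(self, cycle, req, ack):
            """Call once per clock cycle with the sampled interface signals."""
            if ack and not req:
                self.errors.append(f"cycle {cycle}: ack asserted without a pending request")
            if req and not ack:
                self.cycles_waiting += 1
                if self.cycles_waiting > self.max_wait:
                    self.errors.append(f"cycle {cycle}: request not acknowledged within {self.max_wait} cycles")
            else:
                self.cycles_waiting = 0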

HW/SW co-verification: There are several commercial tools and in-house solutions that enable HW/SW co-verification. By running the real software on the simulated hardware, one can debug the hardware and software together before final production. However, these tools do not typically have the capability to look at the hardware and software as one single DUT. They may control the stimuli to the hardware, and may allow modifying software tables or variables, but it is usually impossible to describe scenarios that capture hardware and software dependencies [13]. There is usually no way to describe scenarios such as sending a specific input to the hardware while the software is in a specific state or in a specific interrupt service routine. To exercise an SoC design to its limits, there needs to be a way to capture HW/SW dependencies as part of the test description, the checking rules and the coverage metrics.
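
The fragment below is a minimal, purely illustrative sketch (the state names and the scenario are hypothetical) of how such a combined HW/SW condition could be expressed once the co-verification environment exposes both the hardware signals and the software state:

    # Illustrative sketch of capturing a HW/SW dependency in a test condition:
    # inject a particular hardware stimulus only while the software is inside a
    # specific interrupt service routine.

    class CoSimState:
        def __init__(self):
            self.sw_in_isr = None         # name of the ISR the CPU is currently executing, if any
            self.hw_rx_fifo_full = False  # a hardware condition of interest

    def should_inject_second_interrupt(state):
        """Combined HW/SW scenario: nested-interrupt stress while the RX FIFO is full."""
        return state.sw_in_isr == "uart_rx_isr" and state.hw_rx_fifo_full

The same predicate can double as a coverage point, so the regression reports whether this combined state was ever actually reached.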

6. When Are We Ready For Tape-Out?

Every design group ultimately needs to answer this question [14]. The means for answering it are always insufficient, as verification quality is very hard to measure. Code coverage, toggle or fault coverage, and bug rates are all useful measures, but they are far from complete, and they fail to identify many of the complex combined scenarios that need to be exercised in an SoC. To solve this dilemma, there is a need for coverage metrics that measure progress in a more precise way. To summarize, there is always an element of "spray and pray" in verification, hoping you will hit and identify most bugs. In SoCs, where so many independent components are integrated, the uncertainty in the results is even greater. There are new technologies and methodologies available today that offer a more dependable process, with less "praying" and less time that needs to be invested.
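
One possible shape for such a metric is functional cross coverage: enumerate the combined scenarios of interest up front and record which of them the regression actually exercised. The Python sketch below is only an illustration; the bins are hypothetical:

    from itertools import product

    # Minimal sketch of a functional (cross) coverage metric.
    class CrossCoverage:
        def __init__(self, dma_modes, cpu_states):
            self.goal = set(product(dma_modes, cpu_states))  # all scenarios we care about
            self.hit = set()

        def sample(self, dma_mode, cpu_state):
            if (dma_mode, cpu_state) in self.goal:
                self.hit.add((dma_mode, cpu_state))

        def report(self):
            missed = self.goal - self.hit
            return len(self.hit) / len(self.goal), sorted(missed)

    cov = CrossCoverage(dma_modes=["single", "burst"], cpu_states=["idle", "in_isr", "sleep"])
    cov.sample("burst", "in_isr")
    ratio, missed = cov.report()
    print(f"{ratio:.0%} of combined scenarios covered, missing: {missed}")

Unlike code or toggle coverage, a metric of this kind directly answers whether the combined scenarios that matter for the SoC were ever hit.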

7. Looking At The Big Picture:

SoC verification might seem very similar to ASIC verification at first glance, but it is actually special in many aspects. When verifying an SoC, there is a need to look at the full system and to integrate all the details into a coherent picture. One practical guideline is to take the programmer's view of the system and focus on it; often the SW interface spec (or programmer's guide) is the document that best describes the SoC as a whole. Another useful guideline is to define the verification in terms of high-level transactions, preferably end-to-end transactions.

8. From Unit Level To Full System:

SoC designs are built bottom-up, from many units (i.e., hardware blocks) that are assumed to be verified and are often reused in multiple designs [2,9,11,14]. The fact that the basic units, often IP blocks, might be used in so many different contexts imposes a need to verify these units in any possible scenario that their spec may allow. This can be achieved using spec-based verification, where the rules and properties in the specs are captured in an executable form, allowing tests to span all possible points in the problem space. This relates to IP verification more than SoC verification, so we will not expand on it here. One thing that can easily boost full-system verification is reusing some of the unit-level verification components; in many cases it is very straightforward to do so. Checkers of internal block properties can be integrated into the full verification system.
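
As a small example of reusing a unit-level checker at the system level, the sketch below encodes one hypothetical block property (a FIFO must never be pushed while full) in executable form; because the checker only observes the block's interface, the same code can run under the unit testbench and, unchanged, inside the full-SoC environment:

    # Sketch of a unit-level property checker that can be reused at SoC level.
    # The property and the signal names are assumptions used only for illustration.

    class FifoOverflowChecker:
        def __init__(self, depth):
            self.depth = depth
            self.occupancy = 0
            self.violations = 0

        def sample(self, push, pop):
            """Call every cycle with the FIFO's push/pop strobes."""
            if push and self.occupancy == self.depth:
                self.violations += 1  # property violated: push while full
            self.occupancy = min(self.depth, max(0, self.occupancy + int(push) - int(pop)))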

Another useful technique, which we will just mention briefly, is building verification "shadows" of the actual units. These shadows can really help build the verification bottom-up. The shadows may be very high-level reference models of the blocks: interface compatible, but very abstract in their internal implementation. These models can be assembled into a "shadow system" for early prototyping of the SoC, before all the actual hardware blocks are ready. Later, as the HDL for the blocks becomes available, they can provide partially shadowed systems in which various sub-sets of the blocks can be exercised. Even when all blocks are ready and integrated, the shadow models can serve as reference models for the checking of each block.
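
A shadow model can be as simple as the following Python sketch, where a hypothetical CRC accelerator block is replaced by a behavioural stand-in; it is interface compatible (same inputs and result) but has no notion of cycles or pipelining, and it later doubles as the golden reference when checking the real block:

    import zlib

    # Sketch of a "shadow" reference model for a hypothetical CRC accelerator block.
    class CrcEngineShadow:
        def process(self, payload: bytes) -> int:
            # Behaviour only: no cycle accuracy, no pipeline, no bus interface details.
            return zlib.crc32(payload)

    def check_against_shadow(rtl_result: int, payload: bytes) -> bool:
        """Compare the real block's output with the shadow model's prediction."""
        return rtl_result == CrcEngineShadow().process(payload)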

9. Creating Reusable And Flexible Testbenches:

Separating out reusable verification components (such as external interface stimuli, checkers and coverage) is a nice and easy start for collecting reusable verification components. But there is much more to say on verification reuse. The key to having good reusable components is the way we model the verification system. Let's take a look at some of the aspects we need to consider.



Verification environment vs. test scenarios: The verification environment is the infrastructure; it should be generic and developed for long-term use. This makes the tests easy to write and easy to maintain [14], and the benefits can be sweeping. On the "long term" side, the verification environment should include a description of all data structures, SoC interfaces, protocol rules and SoC properties. It should include generic test generators, checkers and coverage metrics. As a result, the environment is self-checking, and the effort in creating tests is focused just on describing the scenarios of interest. Even complicated tests can be written in a few lines of code if the infrastructure and primitives are well defined (a minimal sketch appears after the list below). This approach saves a lot of the redundant and repetitive descriptions found in many lower-level test benches, and saves significantly on development and maintenance time. It contributes to verification reuse in several areas:

  • Because the verification environment is kept generic, its components can easily be ported to other SoC designs, and the full environment can easily be adapted to modifications in the SoC.

  • The tests are short and descriptive, focus on the scenario described, and can be unaffected by implementation changes in the SoC. In case of changes or modifications to the SoC behavior, the tests typically do not need to change, because the changes can be made in the infrastructure of the verification environment.

  • Even the regression suite itself can be reused in different contexts. For example, if the tests are defined as sequences of Ethernet packets, they can be run on two systems that have different Ethernet interfaces.
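
The sketch referred to above illustrates how short a test can be once the environment owns the data structures, generation, checking and coverage. Everything here is hypothetical (the packet fields and the env object stand in for whatever infrastructure a project defines); the point is only that the test names the scenario and nothing else:

    from dataclasses import dataclass

    @dataclass
    class EthernetPacket:       # data structure owned by the verification environment
        dst_mac: int
        payload_len: int

    def test_back_to_back_short_packets(env):
        """The whole test: two minimal-length packets to the same port, back to back."""
        for _ in range(2):
            env.send(EthernetPacket(dst_mac=0x00005E0053AF, payload_len=46))
        env.wait_until_idle()   # the self-checking environment flags any mismatch

Because the test is written purely in terms of Ethernet packets, the same few lines can be reused on systems whose Ethernet interfaces are implemented quite differently.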

Reuse between groups: In SoC design, more than in other design projects, there are many separate groups working on the verification of the units, and possibly separate people working on the full-system verification [13]. It is important that all verification code can be shared between those groups, especially the group involved with full-system verification. Writing high-level, short, and clear code that captures the verification needs is essential for sharing these components. HDL code, especially when written for verification, tends to be hard to maintain and share; a high-level verification language can promote passing code between groups.

10. Conclusions and Future Work:

SoC verification might seem very similar to ASIC verification at first glance, but it is actually special in many aspects. The main focus of SoC verification needs to be on the integration of the many blocks it is composed of. As the use of IP is prevalent in SoC designs, there is a need for well-defined ways for the IP developer to communicate the integration rules in an executable way, and to help the integrator verify that the IP was incorporated correctly. The complexity introduced by the many hardware blocks and by the software running on the processor points out the need to change some of the traditional verification schemes and trade them in for more automated verification approaches, thereby increasing productivity and simplifying the overall SoC design and verification flow [13]. We are currently deploying this methodology for our new designs and are deriving benefits from it. A challenging task still lies ahead: extending this approach into an SoC design and verification environment that makes designs reusable, thereby decreasing the overall time-to-market.

11. Bibliography:

[1] Henry Chang, Larry Cooke, Merrill Hunt, Grant Martin, Andrew McNelly, and Lee Todd. "Surviving the SoC Revolution: A Guide to Platform-Based Design". Kluwer, 1999.

[2] Pankaj Chauhan, Edmund M. Clarke, Yuan Lu, and Dong Wang. Verifying IP–Core based System–On–Chip Designs. In the IEEE International ASIC/SOC Conference, September 1999.

[3] Hoon Choi, Myung-Kyoon Yim, Jae-Young Lee, Byeong-Whee Yun, and Yun-Tae Lee. Formal Verification of a System-on-a-Chip. 2006

[4] Steve Furber. ARM System Architecture. Addison–Wesley, 1999.

[5] Mentor Graphics, SoC Verification Business Unit. "Design Challenges Thrust on SoC Process", 2004.

[6] IBM and Synopsys. “Design Environment for System On Chip”. White Paper on Synopsys Success Stories.

[7] INTEL. Expanding Moore’s Law, TL 001, 2002.

[8] Jeffrey John Joyce. Multi–Level Verification of Microprocessor–Based Systems. PhD thesis, University of Cambridge, 1990.

[9] Michael Keating and Pierre Bricaud. “Reuse Methodology manual For System–On–a–Chip Designs”. Kluwer, 1999.

[10] Douglas A. Pucknell and Kamran Eshraghian. Basic VLSI Design. Prentice Hall, 1994.

[11] Kong Woei Susanto. "A Verification Platform for a System on Chip". University of Glasgow, UK, 2003.

[12] Jouni Tomberg. "System on Chip Design Flow". Tampere University of Technology.

[13] Guy Mosensoson. "Practical Approaches to SoC Verification". Verisity Design, Inc.

[14] Daniel Geist, Giora Biran, Tamara Arons, Michael Slavkin, Yvgeny Nustov, Monica Farkas, and Karen Holtz. "A Methodology for the Verification of a System on Chip". IBM Haifa Research Labs.


