Memory subsystems called 'next challenge' for system designers
By Mark Gogolewski, Chief Operating Officer, Vice President of Engineering, Denali Software Inc., Palo Alto, Calif., EE Times
January 15, 2002 (12:37 p.m. EST)
URL: http://www.eetimes.com/story/OEG20011213S0027

Networking has benefited from advances in system-on-chip design, notably in terms of specialized intellectual property including network processors and on-chip buses. Now, the challenge is to keep processors fed with data. The bandwidth bottleneck has shifted to memory subsystems, a trend that can be seen across various end markets, especially in networking, where speed is a differentiating factor.

Memory vendors are addressing this problem by developing new architectures focused on high-bandwidth networking and communications applications. The rapid fragmentation of the DRAM market is largely due to the emergence of networking and consumer applications as the key technology driver. Though computers have been the largest consumer of DRAMs since the early 1980s, communications applications are creating the need for new kinds of DRAMs that emphasize raw bandwidth and low latency. Among them are novel architectures such as double-data-rate (DDR and DDR2) SDRAM, Rambus DRAM, reduced-latency DRAM and fast-cycle RAM.

New memory technology doesn't come without a price. In addition to solving the system-level bandwidth bottleneck, designers must also deal with the increasingly diverse protocols and high speeds of these memories. They need new tools and methodologies to solve the unique issues associated with modern memory subsystem design.

While advances in synthesis and physical design have made 10 million-gate system-chips a recent reality, verification technology is still playing catch-up to these monster designs. Engineering teams are struggling with verification efforts that now regularly exceed 70 percent of the total design cycle. The complex memory subsystems required for high-bandwidth SoC designs compound the effort, especially in networking applications. Ensuring that data packets arrive and are disassembled correctly, and that they can be assembled and transmitted correctly, is the key to verifying these systems.

Data structures stored in memory get spread out across several physical devices, making it difficult to analyze and verify the transactions. Ideally, designers would perform the verification at a higher level of abstraction and analyze collections of data packets. However, managing these complex data structures across interleaved arrays of physical memory is a daunting task, even in the most advanced verification environments.

Another challenge in verifying these complex systems lies within the memory components themselves. The main workhorse of functional verification is hardware-description language-based simulation. Good simulation models for memory components are essential for system-level verification.

In the past, it was common to hand-code models or obtain a vendor-supplied model written in Verilog or VHDL. However, the diversity and complexity of modern memory architectures make it impractical to develop and verify memory models in-house. And most memory vendors are no longer able to support models for the wide range of EDA tools in use today.

However, certain approaches can leverage the complexity of the memory subsystem to increase the efficiency of system-level verification, actually simplifying the verification task. It helps to consider that the number of internal system states stored in memory can be orders of magnitude greater than the number of states that are observable at the system's pin-level boundary. Accessing memory increases the observability of the system and enables designers to catch bugs as they happen in memory, instead of thousands of cycles later when they propagate to the system boundary.
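
One way to picture this increased observability: a memory model that invokes user-supplied callbacks on every write lets an invariant be checked the moment a bad value lands in memory, rather than thousands of cycles later at the pins. The sketch below is a minimal Python illustration; the sparse memory model, class and callback names are hypothetical, not any particular tool's API.

```python
class ObservableMemory:
    """Minimal sparse memory model that reports every write to registered callbacks."""
    def __init__(self, depth, width_bits):
        self.data = {}                       # sparse storage: address -> value
        self.depth = depth
        self.mask = (1 << width_bits) - 1
        self.write_callbacks = []            # each called as cb(address, value)

    def on_write(self, callback):
        self.write_callbacks.append(callback)

    def write(self, address, value):
        assert 0 <= address < self.depth, f"out-of-bounds write at {address:#x}"
        self.data[address] = value & self.mask
        for cb in self.write_callbacks:
            cb(address, value)

    def read(self, address):
        assert 0 <= address < self.depth, f"out-of-bounds read at {address:#x}"
        return self.data.get(address, 0)

# Example invariant (hypothetical): descriptors written to 0x100-0x1FF must
# carry a non-zero length field in their low 16 bits.
def check_descriptor(address, value):
    if 0x100 <= address < 0x200 and (value & 0xFFFF) == 0:
        raise AssertionError(f"zero-length descriptor written at {address:#x}")

mem = ObservableMemory(depth=0x10000, width_bits=32)
mem.on_write(check_descriptor)
mem.write(0x104, 0xBEEF0040)                 # passes; a zero length would fail right here
```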

These techniques are especially applicable to networking and communications designs, where much of the verification centers on validating the transfer of structured data — especially linked lists — in and out of memory.
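
For instance, once the memory contents are visible to the testbench (the process described next), a linked list can be validated with a simple walk that flags cycles, runaway lengths and dangling pointers. The node layout in this sketch is assumed purely for illustration: each node's first word holds the address of the next node, with 0 terminating the list.

```python
def walk_list(read_word, head, max_nodes=1024):
    """Follow next pointers from 'head'; return the node addresses visited."""
    seen, nodes, addr = set(), [], head
    while addr != 0:
        assert addr not in seen, f"cycle detected at node {addr:#x}"
        assert len(nodes) < max_nodes, "list longer than expected; likely corrupted"
        seen.add(addr)
        nodes.append(addr)
        next_addr = read_word(addr)          # assumed: next pointer in the node's first word
        assert next_addr is not None, f"dangling pointer at node {addr:#x}"
        addr = next_addr
    return nodes

# Usage against a plain dict standing in for the exposed memory contents:
memory = {0x10: 0x20, 0x20: 0x30, 0x30: 0x0}
assert walk_list(memory.get, 0x10) == [0x10, 0x20, 0x30]
```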

This process begins by exposing memory contents to the system-level testbench for analysis during simulation. Once access to the memory data is established, the data stored in physical components can easily map to a contiguous system-level memory space. This requires simple constructs for width, depth and interleaving expansions among the various physical components. Being able to access memory data at a system-level abstraction, as opposed to the physical-memory abstraction, is key to performing more complex system-level verification tasks.
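
As an illustration of these constructs, the Python sketch below (hypothetical, not any vendor's interface) maps two ranks of four 8-bit-wide devices onto one contiguous 32-bit logical memory: the byte lane supplies the width expansion and the rank supplies the depth expansion.

```python
class LogicalMemory:
    """Map byte-lane (width) and rank (depth) expansions onto one flat word space."""
    def __init__(self, ranks, words_per_rank=0x10000):
        # ranks: list of ranks, each a list of 4 per-byte-lane dicts (addr -> byte)
        self.ranks = ranks
        self.words_per_rank = words_per_rank

    def _locate(self, word_addr):
        return self.ranks[word_addr // self.words_per_rank], word_addr % self.words_per_rank

    def write32(self, word_addr, value):
        lanes, offset = self._locate(word_addr)
        for i, lane in enumerate(lanes):     # lane i holds bits 8i..8i+7 (assumed ordering)
            lane[offset] = (value >> (8 * i)) & 0xFF

    def read32(self, word_addr):
        lanes, offset = self._locate(word_addr)
        return sum(lanes[i].get(offset, 0) << (8 * i) for i in range(4))

# Two ranks of four 8-bit devices presented as one 128K x 32-bit logical memory.
ranks = [[dict() for _ in range(4)] for _ in range(2)]
logical = LogicalMemory(ranks)
logical.write32(0x1F000, 0xDEADBEEF)         # lands in rank 1, spread across the four lanes
assert logical.read32(0x1F000) == 0xDEADBEEF
```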

After raising the abstraction level for the memory subsystem, the next step is to raise the level of abstraction of the data stored in the memory subsystem. For networking applications, the appropriate data abstraction might be ATM cells, Ethernet frames or linked lists. In the case of an ATM cell, data might be stored in a 64-byte buffer. For example, it might be stored across four 8-bit-wide physical memories to form a 32-bit interface to memory. Mapping these data structures makes it possible to view, manipulate and verify at a system-level abstraction during simulation.
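
A small sketch of that mapping, with the buffer layout, byte ordering and header packing assumed for illustration only: reassemble the 53-byte cell from the four byte lanes, then pull the VPI and VCI fields out of the 5-byte header so they can be checked directly.

```python
def read_cell(lanes, base_word):
    """Recover a 53-byte ATM cell from a 64-byte buffer held in four 8-bit devices.

    lanes: four addr -> byte dicts; base_word: word address of the buffer start.
    Within each 32-bit word, bytes are taken lane 0 first (ordering assumed).
    """
    raw = bytearray()
    for word in range(16):                   # 16 words x 4 bytes = 64-byte buffer
        for lane in lanes:
            raw.append(lane.get(base_word + word, 0))
    cell = bytes(raw[:53])                   # 5-byte header + 48-byte payload
    header = int.from_bytes(cell[:5], "big")
    vpi = (header >> 28) & 0xFF              # UNI header: GFC[4] VPI[8] VCI[16] PT[3] CLP[1] HEC[8]
    vci = (header >> 12) & 0xFFFF
    return cell, vpi, vci

# Usage against empty stand-in lanes (all zeros read back):
lanes = [dict() for _ in range(4)]
cell, vpi, vci = read_cell(lanes, 0x40)
assert len(cell) == 53 and vpi == 0 and vci == 0
```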

Now that system-level data abstractions are exposed to the verification environment, any number of verification tasks can be performed to verify the integrity of the system-level data. Placing system-level "assertions" on data and data transactions makes it easy to catch bugs associated with parity violations, invalid data, and out-of-bounds or out-of-order memory accesses.
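
A sketch of what such checks might look like in a testbench scripting layer; the transaction fields, parity convention (even parity over data plus parity bit) and class name are assumptions for illustration, not a particular tool's interface.

```python
def even_parity_ok(word, parity_bit):
    """True if the data word plus its parity bit contains an even number of ones."""
    return (bin(word).count("1") + parity_bit) % 2 == 0

class AccessChecker:
    """Assertions applied to each memory transaction as it occurs."""
    def __init__(self, region_lo, region_hi):
        self.region = (region_lo, region_hi)
        self.next_seq = 0                    # expected in-order sequence number

    def check(self, addr, data, parity, seq):
        lo, hi = self.region
        assert lo <= addr < hi, f"out-of-bounds access at {addr:#x}"
        assert even_parity_ok(data, parity), f"parity violation at {addr:#x}"
        assert seq == self.next_seq, f"out-of-order access: got {seq}, expected {self.next_seq}"
        self.next_seq += 1

checker = AccessChecker(0x0, 0x8000)
checker.check(addr=0x100, data=0x0003, parity=0, seq=0)   # two ones plus 0 -> even, passes
```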

The general concepts of this data-driven verification approach are fairly obvious, but they may seem impractical in many cases. Data-driven verification would not be a valuable solution if the interfaces to the memories had to be rewritten every time the memory configuration changed, or if they required customization whenever a new memory vendor was used. The same is true for the mechanism that controls accesses to the memories from the top-level testbench.

Commercial solutions are available that make these tasks easier. Memory-simulation products enable designers to quickly simulate all types of memory and provide a consistent error-checking and reporting mechanism. The interface for accessing memory data usually includes utility functions for loading, saving and comparing memory contents without using the simulator to move data through the memory pins.
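
The sketch below illustrates the flavor of such backdoor utilities; the "address value" file format and the function names are assumptions, not a specific product's interface.

```python
def load_memory(mem, path):
    """Deposit 'addr value' hex pairs directly into the model (dict), with no pin activity."""
    with open(path) as f:
        for line in f:
            if line.strip():
                addr, value = (int(tok, 16) for tok in line.split())
                mem[addr] = value

def save_memory(mem, path):
    """Dump the model's contents back out in the same 'addr value' format."""
    with open(path, "w") as f:
        for addr in sorted(mem):
            f.write(f"{addr:08x} {mem[addr]:08x}\n")

def compare_memory(mem, golden):
    """Return (addr, actual, expected) tuples where the model differs from a golden image."""
    return [(a, mem.get(a, 0), v) for a, v in golden.items() if mem.get(a, 0) != v]

# Usage: diff the end-of-test memory image against the expected result.
image = {0x100: 0xDEADBEEF, 0x104: 0x0}
mismatches = compare_memory(image, {0x100: 0xDEADBEEF, 0x104: 0x1})
assert mismatches == [(0x104, 0x0, 0x1)]
```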

Also, a number of testbench-automation tools use powerful verification languages to simplify the manipulation of memory models and process memory data during simulation. Combining standard memory-manipulation routines with the consistent interface in commercial memory-simulation products makes it possible for designers to create highly reusable verification objects that can be leveraged across various design efforts.
