
EETimes

Focus is on the best ASIC design flow
By Robert Kaye, Market Development Manager, and John Wilson, Business Development Manager, System-on-Chip Verification Business Unit, Mentor Graphics Corp., San Jose, Calif., EE Times
February 16, 2001 (1:15 p.m. EST)
URL: http://www.eetimes.com/story/OEG20010216S0032

System-on-programmable-chip (SoPC) technology has characteristics of both board-based design and ASIC-based system-on-chip (SoC) design. The immediate attraction of SoPC is that, like a breadboard, the design can be up and running very quickly: a debugger connection can be made and the design can be executing code within minutes. All this, combined with the ability to reprogram the SoPC when design errors are encountered, makes it seem the ultimate breadboard technology.

However, many attributes of the ASIC design process remain, so it's worthwhile looking at which design techniques and tools are relevant for SoPC use. Can the fast prototyping it offers replace existing SoC ASIC design flows, or are they complementary technologies that combine to offer the greatest value to the designer?

The new generation of programmable logic devices reaches levels of complexity previously encountered only in SoC ASIC design. The introduction of processor cores into the SoPC also makes software design, hardware design and the software/hardware interface integral parts of the design process, adding further complexity. Methodologies to handle this complexity have evolved in the SoC arena and are now important to the SoPC designer.

SoC design can, in broad terms, be described as a process of successive refinement. SoC designs start life as a collection of behavioral HDL or C models describing the functionality of the system. At this level, designers investigate the characteristics of the design with abstract, high-performance simulation models prior to committing to a full RTL description.

As each block or function is finalized, the design evolves through a process where each behavioral model is replaced with its RTL equivalent. Only when the complete design is described in a synthesizable RTL netlist can the mapping process onto the SoPC begin.
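Successive refinement can be illustrated in miniature. The sketch below, a toy stand-in for comparing a C reference model against its RTL equivalent in an HDL simulator, pairs a behavioral model of an 8-bit adder with a refined version structured the way RTL would build it (one full adder per bit with an explicit carry chain); both function names are hypothetical:

```c
#include <stdint.h>

/* Behavioral reference model: what the block should do. */
static uint8_t add_behavioral(uint8_t a, uint8_t b) {
    return (uint8_t)(a + b);   /* wraps modulo 256, like 8-bit hardware */
}

/* Refined model: the same function, expressed as RTL would build it,
 * a ripple-carry chain of one full adder per bit. */
static uint8_t add_ripple(uint8_t a, uint8_t b) {
    uint8_t sum = 0, carry = 0;
    for (int i = 0; i < 8; i++) {
        uint8_t ai = (a >> i) & 1u;
        uint8_t bi = (b >> i) & 1u;
        sum |= (uint8_t)((ai ^ bi ^ carry) << i);   /* sum bit */
        carry = (ai & bi) | (carry & (ai ^ bi));    /* carry out */
    }
    return sum;
}
```

Before the refined block replaces the behavioral one, the two are run side by side over the same stimulus and their outputs compared, which is exactly what happens at system scale when a behavioral model is swapped for its RTL equivalent.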

For complex designs such as those targeted for SoPC-class programmable logic, reaching 100 percent RTL can be a lengthy process, potentially taking several months. In embedded systems, code development and debug account for as much as 75 to 80 percent of the total design activity. Waiting until the hardware design is complete before running and verifying code against the hardware makes hardware and software debug serial activities. Serialization negates one of the key advantages of SoPC: it is no use making a rapid prototype if you then spend an equal or greater amount of time verifying the prototype.

Using HDL simulators and hardware/software co-verification tools allows code to be executed, debugged and checked for correct interaction with the hardware while the detailed design is still under development. SoC designers who use co-verification have repeatedly reported savings in design schedules on the order of months through handling software and hardware development in parallel, reducing integration times after prototypes are available and eliminating the risk of finding errors after fabrication.

Just as significant, but perhaps less intuitive, is that parallel development raises the quality of the final design. Software and hardware teams work together during development, using a single reference model of the design. The verification tools uncover problems in the design. These could be actual bugs or interface problems, but often they are inefficiencies: the design may be functional but not optimal. At this stage both hardware and software are fluid, so changes can be easily made in either or both and rapidly simulated for verification. Being able to refine both hardware and software leads to higher-quality designs.

Verification should be early and often, and the design cycle should be one of successive refinement, ending with a netlist that can be targeted to the SoPC platform.

A similar "incomplete RTL" argument may also be relevant if the design includes commercial IP. Full access to the synthesizable models of IP may not be made available until the usual commercial license deals are signed. During the design exploration phase, it is often not appropriate to sign these agreements until it has been verified that the desired IP is functional within the design.

For protection, IP companies will offer no-source models already compiled for use only with an HDL simulator. This compels the designer to simulate and complete initial hardware/software co-verification using the traditional software design and verification tools.

One other constraint on SoPC fast prototyping is the ability to build the prototype quickly. If the complete design can be fitted into a single device, this may be a very feasible approach. However, many designs require additional support circuitry (often analog/RF) to complete the design functionality and allow appropriate connections to be made to facilitate testing of the device.

The usefulness of fast prototyping in these cases depends upon factors such as the time taken to fabricate the rest of the design (as opposed to the SoPC itself) and the ease with which suitable test patterns can be applied to the design. This further blurs the distinctions between SoC and SoPC and drives the need for SoC-style design flows into programmable design.

Modern 32-bit processors present very complex bus protocols, interrupt handling and other software/hardware interfaces to the designer. This creates some interesting challenges for the end user. One aspect is the development and verification of peripherals to conform to the protocols. The development of efficient, low-level code relies on these hardware/software interactions; code developed in isolation is often functional on a prototype but does not operate optimally.
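A minimal sketch of the kind of low-level hardware/software interaction involved, assuming a hypothetical UART-like peripheral with a status register and a data register (the register layout and names are invented for illustration, not any particular device):

```c
#include <stdint.h>

/* Hypothetical register layout for a UART-like peripheral.  On real
 * hardware this struct would be mapped at the peripheral's base
 * address; here it is an ordinary variable so the sketch runs host-side. */
typedef struct {
    volatile uint32_t status;   /* bit 0 = transmitter ready */
    volatile uint32_t data;     /* write one byte to transmit */
} uart_regs_t;

#define UART_TX_READY 0x1u

/* Busy-wait until the peripheral reports ready, then write one byte.
 * The volatile qualifier stops the compiler from caching the status
 * read -- exactly the sort of driver/peripheral handshake that
 * co-verification exercises before silicon exists. */
static void uart_putc(uart_regs_t *uart, uint8_t c) {
    while ((uart->status & UART_TX_READY) == 0)
        ;                       /* spin until TX ready */
    uart->data = c;
}
```

Getting such a loop right depends on the peripheral's protocol behaving as the software assumes, which is why verifying driver code against the hardware model, rather than in isolation, matters.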

How can you visualize both the hardware and software operation in a single flexible environment that allows fine-grained control of the operation and complete debug facilities? One answer is co-verification, and this blurs the boundary at which "verification" ends and "design" starts.

For example, Mentor Graphics recently encountered a case where a software exception handler was being entered more than once for each exception. Tested standalone, both the hardware and software were functional: the exception handler correctly completed and returned, and the hardware passed the test cases applied. However, the multiple entries, and executions, consumed more cycles than required and could have led to problems if multiple exceptions occurred. We were able to track down and rectify the cause of this glitch by having visibility into both hardware (through the logic simulator) and software (in the debugger).
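A common cause of this class of glitch is a handler that services the event but never acknowledges it, so the still-pending interrupt re-enters the handler. The host-side model below is a hypothetical reconstruction of the failure mode, not the actual design Mentor debugged; all names are invented:

```c
#include <stdint.h>

/* Pending-interrupt flag that a handler must explicitly clear. */
static volatile uint32_t irq_pending;
static int handler_entries;

/* Buggy handler: services the event but never acknowledges it. */
static void handler_no_ack(void) {
    handler_entries++;
}

/* Fixed handler: clears the pending bit so the core does not re-enter. */
static void handler_with_ack(void) {
    handler_entries++;
    irq_pending = 0;            /* acknowledge the interrupt source */
}

/* Crude stand-in for the core's exception dispatch: as long as the
 * interrupt remains pending, the handler is invoked again.  Returns
 * how many times the handler ran for one raised interrupt. */
static int dispatch(void (*handler)(void), int max_iterations) {
    handler_entries = 0;
    irq_pending = 1;            /* one interrupt is raised */
    for (int i = 0; i < max_iterations && irq_pending; i++)
        handler();
    return handler_entries;
}
```

Run standalone, both handlers "work": each completes and returns. Only when the dispatch logic and the handler are observed together, as co-verification allows, does the repeated entry become visible.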

Coordination hurdles
Co-verification tools offer significant advantages in the verification and analysis of hardware and software interactions over both standalone simulation techniques and running code on the physical system. The value of those techniques is greatest on the low-level code where the interactions are most complex, but carries through to observing operation of the higher-level applications when they are hardware-critical.

The value proposition of co-verification tools has primarily focused on verification of the hardware/software interface prior to fabrication. So it may surprise some to discover that significant use of the tools occurs after fabrication, in the lab, during prototype debug.

In this phase of the design, strange or anomalous behavior not observed during simulation can be encountered. One of the problems in embedding complex processors into an SoC or system-on-programmable-chip is that it then becomes difficult to observe the internal circuit conditions-often on the processor periphery-that lead to the behavior, especially if embedded trace facilities that allow the software path to be followed are limited.

In fact, the debug visibility features available may depend on the monitor facilities built into the operating system running on the prototype and the amount of observability logic built into the hardware design. Close control and coordination of hardware and software execution can be very difficult, especially with a multithreaded, real-time operating system.

With the ability to match the lab design stimulus and follow the code execution in the hardware/software co-verification tool, the designer can reproduce the problem and easily trace the cause of the observed behavior in a way that is just not possible on the prototype. Often, many different scenarios have to be investigated before the root cause of the problem can be diagnosed with certainty. The designer can extract waveform traces from the prototype with a logic analyzer, and the traces can then be manipulated into a format suitable for the HDL simulator connected to the hardware/software co-verification tools.

There is no doubt that the complex 32-bit SoPC devices offer significant new opportunities for SoC designers who want to create fast, customizable prototypes and low-volume applications. As with all designs, they will be deployed most quickly and reliably when the potential design errors are removed as early as possible in the design process.

Hardware/software co-verification and other ASIC verification techniques will be essential weapons in the armory of designers, and will enable full realization of the benefits of SoPC. Collaboration between EDA vendors and SoPC suppliers will deliver the productivity and ease of use that SoPC demands.
