Mixed-level modeling allows IC virtual prototypes
Markus Wloka and Guy Shaviv
(12/16/2004 5:46 PM EST)
The continuing advancements in semiconductor technology have led to production flows for 130nm, 90nm and below, enabling 40 million-plus gate chips to be reliably manufactured. This article explores the methodologies and tools required to predictably design and verify the type of combined hardware/software system that can be implemented today on a single chip within a reasonable timeframe.
Many have recognized the need for electronic system-level (ESL) solutions that address the design and verification complexity on a level of abstraction above RTL. However, exactly which key technologies and methodologies make up these ESL solutions is still under debate.
This article looks at strategies and practices to successfully design the products powered by these large and complex system-on-chip (SoC) designs. The discussion centers around four key aspects: specification, architecture design, software, and re-use and intellectual property (IP). The article also looks at the increasingly central role functional verification plays, identifies critical success factors for electronic system-level tools that can help automate the design/verification flow, and describes how these aspects come together in a coherent solution.
System-on-chip design characteristics
Many products that contain complex SoCs have the following characteristics in common:
- The ability to send and receive multiple high-speed data streams through sophisticated protocols. These are often based on competing and sometimes evolving standards for wireless (including 3G) or wired (such as USB 2.0, RapidIO and Ethernet) solutions.
- The inclusion of extensive system software. Standard operating systems such as Symbian, Embedded Linux and Windows CE are now common for many wireless devices. Additionally, function-specific applications require large and complex software systems.
- High-definition multimedia (audio, still images and video) as an integral part of the device's functionality. How this data is encoded is subject to competing standards, for example MP3 and MPEG-4.
Competitive advantages for these products often stem from higher capacity, throughput, picture or video quality, and ease of use, which in turn require both standard software and product-specific application software. Power consumption is also a critical factor for most applications, as is product cost, both of which translate ultimately into the silicon requirements for the SoC.
Specifying and architecting SoCs
Translating the high-level product requirements into an architecture that delivers the required functionality, performance, and capacity while minimizing power and silicon real estate is the domain of the system architect.
Architectural choices made early in the project about processor or DSP usage, on-chip interconnect solutions, interface controllers, memory design, and so on all greatly affect the success of the chip. The system architect faces the daunting tasks of balancing all of the requirements coming from both the software and hardware sides, deciding what to reuse and what to develop from scratch, and optimizing the business aspects at the same time. Even though static analysis can be used to make some of these choices, the complexity is such that dynamic, simulation-based techniques are required to gain a full understanding of the performance and the requirements for critical shared components.
Performing such simulation requires assembling system-level models of these combined hardware and software systems early in the design cycle. These system-level models need to be able to incorporate existing hardware and software modules, and to allow for additional high-level modules of the newly designed hardware and software to be included. Typical examples of different modules include the following:
Existing Hardware Modules
- Embedded processors
- DSP cores (for audio/video processing)
- Peripheral IP cores (such as USB, RapidIO, and Ethernet)
- On-chip buses (such as AMBA, OCP, and proprietary)
- Audio/video codecs
- Wireless modems
- Audio/video processing
New Hardware Modules
- Processor, DSP, security
- On-chip interconnect
- Additional functionality
Existing Software Modules
- High-level OS (such as Symbian, Linux, and Palm OS)
- Embedded wireless communications software (such as EDGE, CDMA, WCDMA, GSM, Bluetooth, and 802.11)
- Embedded audio and video
- Multimedia messaging service
- 3D-gaming embedded software and middleware
- Security software
New Software Modules
- Applications
- Power management
- Additional functionality
System-level models capable of incorporating all these modules and efficiently simulating the hardware and software together are called virtual prototypes. They function in much the same way as hardware prototypes, but they are available much earlier in the development process, are more flexible in terms of changes, and are much more economical to replicate across members of project teams.
Virtual prototype requirements
A virtual prototype needs to provide sufficiently accurate answers as to whether the critical design goals are being met. However, the accuracy at which the virtual prototype models the final system does not have to be uniform across the design space.
Typically, certain functionality can be identified as being critical — for example, a critical data path on the chip, a video decoding module, or high-speed interfaces. These portions of the design may be modeled with a high degree of accuracy. In many cases, these critical components are related to shared resources, such as an on-chip-interconnect structure, or a processor/DSP and related caches.
The virtual prototype needs to be complete from a software perspective. Existing software packages must be able to run unmodified. Removing software functionality to create a simpler package is difficult and time-consuming, and may not even be possible if the source code is not available. In addition, running real software allows the completed virtual prototype to be distributed to a large community of software developers many months before the hardware prototype is available.
The virtual prototype needs to be fast, for two important reasons. The first is turn-around time, a measure of the time between starting a test and getting results. For many tests, the virtual prototype needs to go through a reset sequence, often including an operating system boot, so many architects require this "boot time" to be only a few minutes.
The second reason for a high-performing virtual prototype stems from real-world connectivity. Ideally, as with a hardware prototype, real-world data streams can be connected to the virtual prototype. The system architect can then analyze the effects of real-world stimuli.
The architect needs to be able to build such a virtual prototype (and variants) quickly. This phase in an SoC project should be completed within a 3-6 month timeframe, though this clearly depends on the characteristics of the specific project. Derivative projects that can reuse a lot from previous chips may be able to achieve this in a shorter time, while designs that incorporate a lot of new elements may take longer.
The deliverables from this architectural exploration phase need to be used in the downstream design flow. This provides reference models for the downstream hardware development and enables concurrent development of embedded software. The latter is becoming increasingly important with the rapidly growing base of software that needs to be validated for the hardware platform and new software that needs to be developed. Without a concurrent process, software development is a gating item for product release.
Mixed-level modeling
All of the above requirements lead to a dynamic, mixed-level modeling methodology, executed in a tool environment in which the system architect can easily trade off accuracy versus speed, depending on the questions to be answered by the tests. The ability to make this trade-off dynamically, during a single test case, is a significant advantage since this can dramatically reduce the boot time.
In such a dynamic, mixed-level modeling environment, the various pieces of the SoC can be modeled at different levels of abstraction and accuracy, depending on the purpose of the tests. For some tests, the architects may want to analyze bus characteristics for certain use cases, for example simultaneous data transfers between a peripheral and a decryption module or maintaining a time-critical video stream from memory towards the display.
For other tests, a memory-access profile for a certain application might need to be captured. In the first case, a cycle-accurate (or perhaps cycle-approximate) model of the bus is required, while in the second case a less-accurate model suffices.
SystemC has proven to be very effective at providing the infrastructure to tie together models at multiple levels of abstraction and has become a de facto standard for architectural modeling and the creation of reference models for hardware design. Simultaneously, high-performance functional modeling is increasingly used to create purely functional models of processors, subsystems and complete chips. These models can execute unmodified software at speeds of 50 MHz or more, fast enough to also provide real-world connectivity.
Mixed-level modeling entails providing these two modeling styles in a single integrated environment. Figure 1 shows an SoC design modeled with functional components, with cycle-accurate transaction-level components, and with two possible mixes.
Figure 1 — An SoC can be modeled at different levels of abstraction.
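As a concrete illustration of how such a mix can be expressed, the following minimal SystemC sketch (not taken from any specific product; the interface and module names are invented for this example) places a fast functional memory model and a cycle-approximate variant behind one transaction-level interface, so that a master can be bound to either view without modification.

```cpp
// Minimal SystemC sketch: two views of the same peripheral behind one
// transaction-level interface. Illustrative only; names are invented.
#include <systemc.h>
#include <iostream>
#include <map>

// Common transaction-level interface seen by a bus master.
struct mem_if : public virtual sc_interface {
    virtual unsigned read(unsigned addr) = 0;
    virtual void write(unsigned addr, unsigned data) = 0;
};

// Fast functional model: correct data, no simulated time.
struct functional_mem : public sc_module, public mem_if {
    std::map<unsigned, unsigned> storage;
    functional_mem(sc_module_name n) : sc_module(n) {}
    unsigned read(unsigned addr) { return storage[addr]; }
    void write(unsigned addr, unsigned data) { storage[addr] = data; }
};

// Cycle-approximate model: same behavior plus a fixed access latency.
struct timed_mem : public sc_module, public mem_if {
    std::map<unsigned, unsigned> storage;
    sc_time latency;
    timed_mem(sc_module_name n, sc_time lat) : sc_module(n), latency(lat) {}
    unsigned read(unsigned addr) { wait(latency); return storage[addr]; }
    void write(unsigned addr, unsigned data) { wait(latency); storage[addr] = data; }
};

// A master that does not know which abstraction level it is bound to.
struct master : public sc_module {
    sc_port<mem_if> bus;
    SC_CTOR(master) { SC_THREAD(run); }
    void run() {
        bus->write(0x100, 42);
        std::cout << sc_time_stamp() << ": read back " << bus->read(0x100) << std::endl;
    }
};

int sc_main(int, char*[]) {
    master m("m");
    // Bind the master to the timed view; swapping in functional_mem gives
    // the fast view with no change to the master.
    timed_mem slave("slave", sc_time(10, SC_NS));
    m.bus(slave);
    sc_start();
    return 0;
}
```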
Dynamic, mixed-level modeling allows components to dynamically switch from one model view to another during a single test. This provides the ability to dynamically bypass certain components (for example the details of bus operation), enabling the test to quickly get to the point of interest.
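The sketch below illustrates the switching discipline itself, independent of any simulation kernel. A simple proxy owns both a fast functional view and a detailed view of the interconnect and applies a requested switch only between transfers, when it is idle and both views are in a known state. The class and member names are invented for this illustration.

```cpp
// Conceptual C++ sketch of dynamic view switching. Illustrative only.
#include <cstdio>
#include <memory>

struct bus_view {                        // common contract for both views
    virtual ~bus_view() = default;
    virtual unsigned transfer(unsigned addr, unsigned data) = 0;
};

struct fast_view : bus_view {            // zero-cost bypass of bus detail
    unsigned transfer(unsigned addr, unsigned data) override {
        return data;                     // deliver immediately, no timing
    }
};

struct detailed_view : bus_view {        // stands in for a cycle-accurate model
    unsigned cycles = 0;
    unsigned transfer(unsigned addr, unsigned data) override {
        cycles += 4;                     // pretend each transfer costs 4 bus cycles
        return data;
    }
};

class switching_proxy {
    std::unique_ptr<bus_view> fast{new fast_view};
    std::unique_ptr<bus_view> detailed{new detailed_view};
    bus_view* active = fast.get();
    bus_view* requested = nullptr;       // pending switch, applied when idle
public:
    void request_detailed(bool on) { requested = on ? detailed.get() : fast.get(); }
    unsigned transfer(unsigned addr, unsigned data) {
        // The proxy is idle between transfers, so a pending switch is safe here.
        if (requested) { active = requested; requested = nullptr; }
        return active->transfer(addr, data);
    }
};

int main() {
    switching_proxy bus;
    for (int i = 0; i < 1000; ++i) bus.transfer(0x80000000u + i, i);  // boot-like phase, fast
    bus.request_detailed(true);          // now study the region of interest in detail
    for (int i = 0; i < 10; ++i) bus.transfer(0x80000000u + i, i);
    std::puts("switched to detailed view after the boot-like phase");
    return 0;
}
```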
Mixed-level modeling and IP
To fuel a mixed-level modeling methodology, IP blocks that work in the modeling environment are critical. It is impractical to start design and verification from scratch each time a new SoC has to be architected. The mixed-level modeling methodology should be accompanied by a corresponding IP strategy for the functional blocks and on-chip interconnects.
The availability of implementation IP (cores) for processors, DSPs, and interface IP (for example USB or RapidIO) determines to a large degree the building blocks from which to construct the SoC. These IP blocks, proven and optimized for implementation in RTL, gates or layout form, need to be accompanied by more abstract models so that architects can effectively explore the trade-offs between the various IP blocks early in the development cycle. Such an abstract model consists of a functional view, and possibly one or more transaction-level interfaces.
For processors this is already an established practice; cycle-accurate processor models and instruction-set models are typically provided by the processor IP vendors. Debuggers can be attached to these processor models and software can be executed. In today's era of platform-based design methodologies, functional models of complete subsystems, such as Texas Instruments' OMAP Platform, are also available.
High-level functional models of the peripheral blocks and other key components that constitute an SoC also need to be readily available. This allows system architects to quickly assemble complete system models. Only if all functionality of the SoC is available can complete operating systems and applications run unmodified on these virtual prototypes.
Interconnect IP to connect the models is equally important. The on-chip infrastructure is a critical shared resource that must be designed carefully. Interconnect IP with standard interfaces that match those of the functional blocks is a key enabler for a mixed-level modeling methodology. Fast, high-level models that capture key characteristics such as throughput and latency help drive the choices for the final implementation.
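A first-order timing model is often sufficient at this stage. The sketch below, with invented and purely illustrative parameters, estimates burst transfer time as arbitration latency plus one bus cycle per beat, which is enough to compare, for example, a 32-bit and a 64-bit data path before a cycle-accurate interconnect model exists.

```cpp
// First-order interconnect timing sketch. Parameters are illustrative and
// not tied to any specific bus IP.
#include <cstdio>
#include <cmath>

struct interconnect_params {
    double clock_mhz;        // bus clock
    unsigned width_bytes;    // data width per beat
    unsigned arb_cycles;     // arbitration / address-phase latency in cycles
};

// Estimated time in microseconds to move `bytes` across the interconnect.
double transfer_us(const interconnect_params& p, unsigned bytes) {
    double beats  = std::ceil(double(bytes) / p.width_bytes);
    double cycles = p.arb_cycles + beats;
    return cycles / p.clock_mhz;               // cycles / (cycles per microsecond)
}

int main() {
    interconnect_params narrow = {100.0, 4, 3};   // 100 MHz, 32-bit, 3-cycle arbitration
    interconnect_params wide   = {100.0, 8, 3};   // same clock, 64-bit data path
    unsigned frame = 4096;                        // e.g. one buffer's worth of bytes
    std::printf("32-bit bus: %.2f us per %u-byte burst\n", transfer_us(narrow, frame), frame);
    std::printf("64-bit bus: %.2f us per %u-byte burst\n", transfer_us(wide, frame), frame);
    return 0;
}
```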
Mixed-level modeling and verification
The mixed-level modeling process supporting the architectural exploration phase has two main deliverables in the overall design flow: the reference models for the downstream RTL design phase and the functional platforms for software development. As the design implementation proceeds, RTL models become available as well.
A single unified verification environment is essential to ensure functional consistency between the reference models, the software platform, and the RTL. This environment must provide a testbench infrastructure that enables the SoC team to develop a set of test cases that can be applied to any (mixed-level) representation of the design. This set of tests defines a regression suite that serves as the "golden reference" to ensure the consistency between the different design abstractions through the development cycle.
The requirements for such a testbench are quite challenging. First, it needs to efficiently create a variety of scenarios to drive the architecture models. Second, it needs to interact with the software running on the design, for example to control when a particular interrupt is generated. Third, it needs to handle the signal-level details required to verify the final RTL model of the SoC. Finally, it needs to be able to incorporate real-world devices.
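One way to structure such a testbench is to write every test case against a small design-access interface and supply adapters for each representation of the design. The C++ sketch below is a simplified illustration of that idea, not the verification environment described in this article; the interface, register offset, and adapter are invented for this example. In a real flow, an RTL adapter would translate the same calls into signal-level bus transactions.

```cpp
// Sketch of a representation-independent test case. Illustrative only.
#include <cstdio>
#include <map>

// What every test case is allowed to use, regardless of design representation.
struct dut_access {
    virtual ~dut_access() = default;
    virtual void     write_reg(unsigned addr, unsigned data) = 0;
    virtual unsigned read_reg(unsigned addr) = 0;
    virtual void     raise_interrupt(int line) = 0;   // lets a test interact with the software
};

// Adapter for a fast functional model: the register map is plain host memory.
struct functional_adapter : dut_access {
    std::map<unsigned, unsigned> regs;
    void     write_reg(unsigned a, unsigned d) override { regs[a] = d; }
    unsigned read_reg(unsigned a) override              { return regs[a]; }
    void     raise_interrupt(int line) override {
        std::printf("functional model: interrupt %d asserted\n", line);
    }
};

// The same test case runs unchanged against any adapter.
bool test_reg_access(dut_access& dut) {
    const unsigned CTRL = 0x10;          // hypothetical register offset
    dut.write_reg(CTRL, 0xA5);
    dut.raise_interrupt(5);              // e.g. prod the software side to react
    return dut.read_reg(CTRL) == 0xA5;   // check the expected response
}

int main() {
    functional_adapter fm;
    std::printf("register access test: %s\n", test_reg_access(fm) ? "PASS" : "FAIL");
    return 0;
}
```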
Figure 2 — A unified testbench keeps the hardware and software views of the design in sync.
Figure 2 shows a mixed-level model with a testbench that can run software, create stimulus and check responses, and interact with real-world data. This approach supports a verification methodology much broader than the traditional focus on verifying only RTL.
Mixed-level modeling and software development
Virtual prototypes offer distinct advantages for software development — simulation performance and early availability — that are a direct result of the modeling abstraction level. Simulation performance is crucial to run substantial amounts of software such as a modern operating system or a multimedia decoder. Early availability is important for starting the software development early enough in the project timeline, since the software development tasks often are on the critical path for product release.
A key requirement for software development is that the virtual prototypes interface and work with common software debuggers. Software developers have different debugger preferences, so a mixed-level verification platform has to provide interfaces to multiple software debuggers, and these interfaces have to operate together with the debuggers in the hardware and verification domains.
Using virtual prototypes instead of actual hardware prototypes has additional advantages:
1. They are easy to update and distribute. Especially with geographically dispersed design teams, this is a substantial benefit.
2. Because they may actually run faster than FPGA-based hardware prototypes and emulation, more work can be done in less time.
3. They provide better debug capabilities because of increased visibility into both the hardware and the software.
Virtual prototypes are instrumental in moving the software development and debugging tasks earlier in the overall project schedule and help to substantially reduce the overall development cycle.
Mixed-level modeling in practice
Through a close collaboration between Synopsys and Virtio, we have created an integrated tool and IP environment that enables mixed-level modeling and the development of virtual prototypes that tie into Synopsys' unified verification environment.
As an example, we have implemented a mixed-level model for an LSI Logic CoreWare platform. This platform is based on an ARM926EJ-S core, which was modeled with an instruction-accurate, instruction-set simulator (ISS) from Virtio. All the peripherals were implemented with Virtio's functional modeling technology.
We inserted a timed, cycle-accurate SystemC model of the ARM AHB bus between the ISS and the peripherals. The ISS appears to the bus as a master component, and all the functional peripheral models appear as slave components. We chose to connect the AHB bus model only on the data bus, while the instruction bus remained in the high-level functional model; this was necessary to minimize the impact of bus transactions on overall performance.
We implemented a switch that allows us to disengage the SystemC AHB bus model dynamically for high-performance tasks such as booting an operating system, and to re-engage the SystemC bus model for performing specific tests after the operating system is running. To ensure predictable transitions between functional behavior and cycle-accurate behavior we allow this switch to happen only at times when the state of both models is known, such as when the bus is idle.
The resulting system was able to boot Linux in 22 seconds when the SystemC AHB model was disengaged. The same task took 21 minutes with the AHB model engaged. All tests were run on a 2.0-GHz x86 Linux machine. The Linux image we used executes 450 million instructions for the boot process, which translates to 20 MIPS for the functional model, and to 357 KIPS with the timed, cycle-accurate AHB model.
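These throughput figures follow directly from the instruction count and the measured wall-clock times:

\[
\frac{450 \times 10^{6}\ \text{instructions}}{22\ \text{s}} \approx 20\ \text{MIPS},
\qquad
\frac{450 \times 10^{6}\ \text{instructions}}{21 \times 60\ \text{s}} \approx 357\ \text{KIPS}.
\]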
Even though these performance numbers are preliminary and our integration is still subject to further optimizations, the figures do validate the performance differences discussed earlier. The results also highlight the need for advanced mixed-level modeling technology, specifically to verify the hardware model in the presence of any substantial amount of software.
Conclusion
ESL is about the methodology, tools and IP that help designers architect increasingly complex SoCs, which often contain multiple processors, DSPs, and complex interface logic.
This article has presented a dynamic mixed-level modeling methodology that allows designers to build virtual prototypes by using a mix of functional models and detailed timed models. By selectively extending the functional-modeling approach with interfaces that can be at the transaction level, or even the register-transfer level, this solution allows designers to minimize the impact on the overall simulation speed and ensure sufficient performance to execute the software development tasks.
A key enabler for this mixed-level modeling methodology is the ability to ensure the consistency between the functional models used for software development and the final hardware (RTL) models. This consistency can be achieved with today's verification technology using testbenches that drive the models and check for corresponding behavior.
Fueling this design and verification flow requires a rich library of IP to allow system architects to run operating systems and applications much earlier in the development process. This enables analysis of the mutual dependencies between hardware and software early in the project, allowing fine-tuning of architecture to meet performance and power requirements.
Virtual prototypes based on mixed-level modeling technology, combined with an advanced verification methodology, enable a predictable, concurrent hardware and software development flow that substantially reduces development times for today's complex SoCs.
Markus Wloka is director of R&D for the Synopsys Verification Group in Herzogenrath, Germany. Wloka joined Synopsys in 1996, where he contributed to the COSSAP and System Studio system-level design tools. Before Synopsys, he joined Motorola SPS in Tempe, Arizona, in 1991, where he worked on the Entice standard-cell characterization software, with a focus on distributed Spice and power characterization.
Guy Shaviv, vice president of engineering at Virtio Corp., has over 15 years of hands-on experience in software development and in successfully bringing software applications to market. His background includes positions for Interval Research, NASA Ames Research Center, SORBA Inc., and the IAF.