Improving Software Development and Verification Productivity Using Intellectual Property (IP) Based System Prototyping
By Frank Schirrmeister, Synopsys
Abstract:
Once a chip development project has started, project managers are asked almost immediately to provide early representations of the chip development for various purposes, such as:
- Marketing needs material and basic documentation to interact with early adopters.
- System architects need models of the chip to assess architectural effects such as bus bandwidth, cache utilization and memory bandwidth.
- Software developers need executable representations of the design under development to start porting operating systems and existing software and developing new code.
- Hardware developers need executable specifications to validate that their implementations are correct.
This paper will review different use models driving requirements for intellectual property (IP) models in different project stages. Different prototyping techniques will be introduced, and we will show that none of them alone can meet all the requirements users have for IP models. The paper then introduces System Prototyping as a solution that combines the advantages of virtual prototypes with rapid prototypes.
Prototyping Use Models Drive Different Needs of IP Models
Models of various types are in high demand from day one. The first section of this paper analyzes the different needs for models that enable prototyping, driven by the three main use models of architecture exploration, software development and verification:
- At the beginning of a project, architecture exploration allows chip architects to make basic decisions with respect to the chip topology, performance, power consumption and on-chip communication structures. For example, information gathered early on cache utilization, performance of processors, bus bandwidth, burst rates and memory utilization drives basic architecture decisions.
- Software developers would ideally like to start porting legacy code and developing new software functionality from the outset, i.e. when hardware development kicks off. They would like to receive an executable representation of the chip that runs at real time and accurately reflects all software-visible interfaces to the hardware (such as register images). Depending on the type of software being developed, users may require different accuracy from the underlying model. A developer of time-critical real-time software requires more accurate representations than a developer of middleware and applications; for the latter, correct register representations and functional modeling are sufficient (a minimal register-model sketch follows this list).
- In a perfect world, verification of the design starts in parallel as well. Early on, the environment in which the chip resides is represented using traces and traffic generators. Early test benches essentially define the use-model scenarios for the chip under development.
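To make the notion of register-accurate, functional modeling concrete, the following minimal sketch shows a peripheral register model of the kind a virtual platform exposes to middleware and application developers. It is only a sketch and assumes a SystemC/TLM-2.0 environment; the module name, registers and offsets are purely illustrative and do not correspond to any particular IP.

#include <systemc>
#include <tlm>
#include <tlm_utils/simple_target_socket.h>
#include <cstdint>
#include <cstring>

// Functional model of a hypothetical peripheral exposing two 32-bit registers.
struct timer_regs : sc_core::sc_module {
    tlm_utils::simple_target_socket<timer_regs> socket; // bus interface seen by the software
    uint32_t ctrl  = 0;  // offset 0x0: control register
    uint32_t count = 0;  // offset 0x4: counter value (functional only, no cycle timing)

    SC_CTOR(timer_regs) : socket("socket") {
        socket.register_b_transport(this, &timer_regs::b_transport);
    }

    // Blocking transport: decode the address and access the register file.
    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time&) {
        uint32_t* reg = (trans.get_address() == 0x0) ? &ctrl : &count;
        if (trans.is_write())
            std::memcpy(reg, trans.get_data_ptr(), sizeof(uint32_t));
        else
            std::memcpy(trans.get_data_ptr(), reg, sizeof(uint32_t));
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};

A driver running on the processor model reads and writes these registers exactly as it would on silicon, which is the level of fidelity most middleware and application development requires.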
Characteristics of IP Models for Prototyping
As shown in Figure 1, design teams are adopting different prototyping techniques to deal with architecture analysis, hardware/software co-development and verification:
- Software-based development methods, often referred to as virtual platforms
- Hardware-based development methods like emulation and FPGA prototypes
- Actual silicon-based development methods using chips from previous projects or silicon prototypes once engineering samples are available
Figure 1: Techniques to advance parallel development of hardware and software.
When comparing the different prototyping methods prior to silicon availability, eight specific characteristics need to be considered:
Time of Availability: Once the specification for a specific design is frozen, the time it takes for models to become available directly determines how long architects, software developers and verification engineers have to wait before starting on the project. The later models become available in the design flow relative to real silicon, the less their impact and perceived value to developers will be.
Execution Speed: Developers normally ask for the fastest models available. Ideally, the models for software development and for assessment of architectural effects run as fast as the real hardware will execute. For software regressions, execution that is faster than real time can be beneficial. However, execution speed is almost always achieved by omitting model detail, so it often has to be traded off against accuracy.
Accuracy: The type of software being developed determines how accurately the development method has to represent the actual target hardware, ensuring that issues identified at the hardware/software boundary are not introduced by the development method itself. Similar considerations apply to verification and architecture analysis. For verification, more detailed models are required once timing and performance throughput are assessed. For architecture analysis, developers often rely on a mix of model accuracies; often only the parts of the system that require detailed analysis are modeled at greater detail.
Production Cost: The cost of a model comprises both the actual cost of production and the overhead cost of bringing up hardware/software designs using it. The production cost determines how easily a model can be replicated to furnish developers. In general, software models are very cost-effective to produce and distribute once they are developed; they can easily be replicated to furnish large numbers of software developers or verification regressions. Hardware-based representations like FPGA prototypes require hardware to be available for each developer and each regression, often limiting wide proliferation to a large number of software developers.
Bring-up Cost: Any activity needed to enable models beyond what is absolutely necessary to get to silicon can be considered overhead. As an example, taking verified RTL and bringing it up in an FPGA prototype requires a certain amount of bring-up time and cost. If C models and virtual prototypes are not part of the standard flow, their development and bring-up is often considered overhead as well. The pressure software teams face to get early access to representations of the hardware determines whether the investment in bring-up cost is judged to create positive returns. Given that software has become the development “long pole” that keeps silicon from entering mass production, the pressure to provide early access for software development is continuously growing.
Debug Insight: The ability to analyze the inside of a design, i.e. to access signals, registers and the state of the hardware/software design, is crucial. Once actual silicon is available, it is hard to probe into hardware details. FPGA prototypes provide more flexible and better debug insight. Software simulations expose all available internals and provide the best debug insight.
Execution Control: During debug, it is important to be able to stop the representation of the target hardware using assertions in the hardware or breakpoints in the software, especially for designs with multiple processors, in which all components have to stop in a synchronized fashion. In the actual target hardware this is almost impossible, or at least very difficult, to achieve. Software simulations allow the most flexible execution control.
System Interfaces: If the target design is a System on Chip, it is important to be able to connect the design under development to real-world interfaces. For example, if a USB interface is involved, for verification and software development it is important to connect the development method to real USB protocol stacks. Similarly, for network and wireless air interfaces, connecting the design representation to real-world software is important for software development. The actual target hardware runs at full speed and provides the intended access to system interfaces. FPGA prototypes are often sufficient but are sometimes more difficult to connect, as all parts of the design – including the interface – may have to be slowed down appropriately, sometimes making connections infeasible or requiring specific hardware to deal with the resulting effects. Software simulations may require even further slow-down of the system environment, sometimes making them infeasible; however, specific software dealing with the effects of the slow-down is comparatively easy to implement. In addition, the actual interfaces can often be bypassed completely by connecting directly to protocol stacks like USB. Finally, virtualization allows development using interfaces like USB 3.0, which may not yet broadly exist in hardware.
Figure 2: Key project data of 12 projects, showing average elapsed time per development task as a percentage of the elapsed time from requirements to tape-out, and effort in man-weeks as a percentage of the overall hardware and software development effort (Source: IBS, Synopsys 2007)
To properly understand how software and hardware development can overlap using different technologies, it is important to understand the elapsed time for each development phase. Figure 2 also shows the elapsed time for each phase as a percentage of the elapsed time from frozen requirements to tape-out (note that the percentages of the individual phases do not add up to 100% because the phases overlap).
It becomes clear that, on average, a stable specification – the prerequisite for virtual platforms – is available 17% of the way into the project, while it takes almost 70% of the time from requirements to tape-out to arrive at stable RTL – the prerequisite for hardware prototypes. Virtual platforms and hardware prototypes therefore become available at very different times in a project and are applicable to very different development phases. In reality they are complementary solutions for early software development and verification.
System Prototyping – Hybrids of Virtual Platforms and FPGA Prototypes
The previous section made it clear that none of the prototyping techniques comes without disadvantages. As a result, hybrid solutions are emerging that allow “System Prototyping” of systems and SoCs, combining the advantages of software-based and hardware-assisted development methods, as illustrated in Figure 3.
Three technology components are required to enable System Prototyping:
- Physical interfaces must be available to connect the actual hardware prototype to the workstation running the simulation. PCI Express is a common solution for this.
- Data must be transported using an agreed-upon protocol between the software and hardware worlds. Most hardware-assisted technologies offer proprietary interfaces, and SCE-MI, developed under the umbrella of Accellera, has become a standard in this domain.
- For conversion from the transaction-level model to the transport interface, transactors are necessary to translate high-level protocols like AXI, OCP and AMBA into the actual signals driving and observing the blocks that are executing in emulation or FPGA prototypes.
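As a simplified illustration of the third component, the following SystemC sketch shows the principle of a transactor that turns a transaction-level write into pin-level activity. It is a sketch only, assuming a SystemC/TLM-2.0 environment; a production transactor would implement a full bus protocol such as AXI and use the SCE-MI transport toward the emulator or FPGA prototype rather than the simplified signals shown here.

#include <systemc>
#include <tlm>
#include <tlm_utils/simple_target_socket.h>
#include <cstdint>

// Highly simplified transactor: TLM transactions in, pin-level bus cycles out.
struct simple_bus_transactor : sc_core::sc_module {
    tlm_utils::simple_target_socket<simple_bus_transactor> tlm_side; // driven by the virtual platform
    sc_core::sc_out<bool>                 wr_en;  // pin-level side toward the prototyped RTL
    sc_core::sc_out<sc_dt::sc_uint<32> >  addr;
    sc_core::sc_out<sc_dt::sc_uint<32> >  wdata;

    SC_CTOR(simple_bus_transactor) : tlm_side("tlm_side") {
        tlm_side.register_b_transport(this, &simple_bus_transactor::b_transport);
    }

    // Convert one TLM write into a single-cycle pin-level write.
    // (The wait() assumes the initiator calls b_transport from a thread process.)
    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time&) {
        if (trans.is_write()) {
            addr.write(static_cast<unsigned>(trans.get_address()));
            wdata.write(*reinterpret_cast<uint32_t*>(trans.get_data_ptr()));
            wr_en.write(true);
            wait(10, sc_core::SC_NS);   // hold the write strobe for one bus cycle
            wr_en.write(false);
        }
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};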
Figure 3: System Prototyping combines the advantages of software-based and hardware-assisted prototypes
System Prototyping Use Models
System prototyping enables five use models:
- RTL Reuse and Architecture Verification: Mitigating the bring-up effort for virtual platforms, existing RTL from previous projects can be mapped into FPGA prototypes, avoiding the modeling effort for potentially complex IP blocks. The cycle-accurate execution in FPGA prototypes also increases overall fidelity and allows virtual models to be replaced with RTL to verify that architecture decisions are correct. This mitigates the slow execution of cycle-accurate software models.
- Accelerated Software Execution: Software typically runs faster on workstations and virtual processor models than in FPGA prototypes. System Prototyping with processor models on the workstation allows faster overall execution while maintaining the accuracy of accelerators and peripherals, combining the execution speed advantages of the various development methods.
- Virtual Platform as testbench for FPGA prototype: System Prototyping mitigates the late availability of hardware-assisted development methods, enabling the efficient re-use of early system-level development efforts for RTL verification and post-silicon validation purposes. The virtual platform acts as a testbench for RTL, which avoids duplicate efforts and enhances model re-use.
- System environment connections: Virtual platforms already provide real-world and virtual I/O for popular interfaces like USB, Ethernet and SATA. Daughter cards in FPGA prototypes provide real-world I/O with interfaces to real-life streams, like wireless physical interfaces. System Prototyping with hybrids of virtual platforms and FPGA prototypes allows real-world stimulus to be used where most appropriate – at both the transaction and signal levels.
- “Virtual ICE” connected to FPGA prototype: Software developers often dislike having development boards on their desks and instead prefer a development environment that combines a keyboard, a screen and their familiar software debugger. Re-use of the virtual development environment allows better access to FPGA prototypes and decreases set-up time. System Prototyping allows the FPGA prototype to be kept remote while keeping the development environment familiar to software developers.
System Prototyping Case Study
Figure 4 illustrates a case study in which a complex virtual platform running a Linux operating system is connected to a USB core in an FPGA prototype. The connection between the virtual world and the FPGA prototype is established using the Accellera SCE-MI interface.
Figure 4: System Prototype with a USB IP core mapped into an FPGA prototype
The SoC, modeled as a virtual platform, is specified for 200MHz to 266MHz and runs at 25 MIPS. The USB core can run from 30MHz to 180MHz and runs at 50MHz in the FPGA prototype. The USB core runs freely and produces an interrupt every 125 microseconds. A USB memory stick is connected to the USB port of the FPGA prototype and a picture is copied onto it.
This hybrid System Prototype is compared to a pure virtual platform, which runs in software only, completely independent of any hardware. The IPMate USB board has been modeled at the transaction level, including an ARM920 TLM model and the Synopsys USB OTG 2.0 TLM model. The memory stick is connected directly to the Windows drivers, allowing development of drivers on a virtual host with real external USB devices.
The other comparison point is a virtual platform connected to the USB model in RTL, also purely in software simulation. The ARM920 TLM-based virtual platform is connected to the USB OTG 2.0 model in RTL at full fidelity for verification. The memory stick is connected directly to the Windows drivers and reconnected to the RTL USB using simulation-based transaction-level interfaces. This scenario allows verification of drivers on a virtual host with a real external USB device while providing the full fidelity of the USB model as it runs in RTL.
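The value of this comparison lies in the fact that the virtual platform itself stays the same across the three scenarios; only the implementation behind the USB integration point changes. The hypothetical C++ sketch below illustrates that integration pattern. The class and function names are illustrative only and are not the actual Synopsys, SCE-MI or co-simulation APIs.

#include <cstdint>
#include <cstdio>
#include <memory>
#include <string>

// Interface the virtual platform's bus decoder uses to reach the USB controller,
// whichever back-end implements it.
struct usb_backend {
    virtual void     reg_write(uint32_t offset, uint32_t value) = 0;
    virtual uint32_t reg_read(uint32_t offset) = 0;
    virtual ~usb_backend() = default;
};

// Scenario "pure virtual platform": functional TLM-style model, fastest, register-accurate only.
struct tlm_usb_model : usb_backend {
    uint32_t regs[64] = {};
    void     reg_write(uint32_t o, uint32_t v) override { regs[(o >> 2) & 63] = v; }
    uint32_t reg_read(uint32_t o) override { return regs[(o >> 2) & 63]; }
};

// Scenario "virtual platform plus RTL": bridge to the USB RTL in a software simulator (stubbed here).
struct rtl_cosim_bridge : usb_backend {
    void     reg_write(uint32_t o, uint32_t) override { std::printf("RTL co-sim write 0x%x\n", (unsigned)o); }
    uint32_t reg_read(uint32_t o) override { std::printf("RTL co-sim read 0x%x\n", (unsigned)o); return 0; }
};

// Scenario "System Prototype": transactor toward the USB core in the FPGA prototype (stubbed here).
struct fpga_transactor : usb_backend {
    void     reg_write(uint32_t o, uint32_t) override { std::printf("FPGA write 0x%x\n", (unsigned)o); }
    uint32_t reg_read(uint32_t o) override { std::printf("FPGA read 0x%x\n", (unsigned)o); return 0; }
};

// The processor model, Linux and the drivers under development stay unchanged
// regardless of which back-end is selected; only this factory decision differs.
std::unique_ptr<usb_backend> make_usb_backend(const std::string& mode) {
    if (mode == "rtl")  return std::make_unique<rtl_cosim_bridge>();
    if (mode == "fpga") return std::make_unique<fpga_transactor>();
    return std::make_unique<tlm_usb_model>();
}

int main() {
    auto usb = make_usb_backend("fpga");  // switch between "tlm", "rtl" and "fpga"
    usb->reg_write(0x0, 0x1);             // e.g. a driver enabling the controller
    return 0;
}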
Two other reference points for comparison are pure RTL simulation and pure execution in an FPGA prototype. These reference points could not be assessed in this case study because the processor and SoC RTL were not available.
As a result, the benchmark of booting Linux on the virtual platform, mounting the USB device in the FPGA prototype and copying a picture onto the memory stick was about 70 times slower than on the actual hardware board when the virtual platform co-simulated with the USB core in RTL. The virtual platform co-executing with the USB core in an FPGA prototype was about 6 times slower than the real hardware board, and the pure virtual platform scenario was about 2 times slower than the real hardware board.
However, as outlined in the earlier sections, speed of the models is only one important characteristic. Figure 5 compares the characteristics “time of availability”, “speed”, “accuracy”, “hardware debug” and “software debug” for the six scenarios “pure RTL”, “RTL plus virtual platform”, “pure virtual platform”, “System Prototype combining the virtual platform with a FPGA prototype”, “FPGA prototype” and “Real hardware board”.
Figure 5: Comparison of six prototyping scenarios
As outlined, the scenario “pure RTL” is fully accurate and allows the best hardware debug but is very slow. Typical comparisons to a combination of RTL with a transaction-level processor model show speedups of up to 32 times. This scenario is so slow that software debug becomes infeasible, but it is available early in a project, i.e. when RTL is still in development and not yet stable.
The scenario “RTL plus virtual platform” is available even pre-RTL for components that are not yet coded and can be combined with blocks for which RTL exists. It offers a good speedup compared to the first scenario, increasing verification efficiency, but while the accuracy can be adjusted by choosing between RTL and TLM models, the slowest component will always determine overall execution speed.
The “pure virtual platform” can be much faster, especially when it uses loosely timed transaction-level modeling styles, but speed always has to be traded off against accuracy. While loosely timed platforms are very suitable for general driver development, cycle-accurate models are preferable for real-time software development; in exchange, they slow down the simulation dramatically.
The real hardware at the end of the design flow is fully cycle accurate, represents the real chip and runs at real time. However, it is available latest in the design flow.
The pure FPGA prototype is also cycle accurate and often runs close to real-time speed, but compared to the virtual options it is available quite late, i.e. when most of the bugs have been identified and the RTL is becoming stable. FPGA prototypes do offer better control and better debug insight than the real hardware does.
Finally, System Prototypes, the combination of virtual platforms with FPGA prototypes, offer a very feasible tradeoff. They are available earlier, often before all RTL is coded, if TLM models are connected to re-used RTL. They are slightly slower than pure virtual platforms and pure FPGA prototypes but let users choose the accuracy by deciding which components to keep at which abstraction level. They also offer good debug insight for both the hardware and the software, which makes them, in summary, a good tradeoff among the various prototyping solutions.