Infusing Speed and Visibility Into ASIC Verification
By Mario Larouche, Synplicity
(02/19/08, 01:48:00 PM EST) -- Embedded.com
High-performance, high-capacity FPGAs continue to experience an exponential growth in usage, both in their role as prototypes for ASIC/SoC designs and as systems in their own right. These designs typically involve complex combinations of hardware and embedded software (and also, possibly, application software).
This is resulting in a verification crisis because detecting, isolating, debugging, and correcting bugs now consumes significantly more time, money, and engineering resources than creating the design in the first place.
The problem is that bugs in this class of design can be buried deep in the system and can manifest themselves in non-deterministic ways based on complex and unexpected interactions between the hardware and the software. Simply detecting these bugs can require extremely long and time-consuming test sequences.
Once a problem is detected, actually debugging the design requires a significant amount of time and effort. Furthermore, when verification tests are performed using real-world data, such as a live video stream from a digital camera, an intermittent bug may be difficult, if not impossible, to replicate.
There are a variety of verification options available to engineers, including software simulation, hardware simulation acceleration, hardware emulation, and FPGA-based prototypes. Each approach has its advantages and disadvantages (Table 1 below).
RTL simulators, for example, are relatively inexpensive, but full-system verification performed using this approach is extremely slow. One major advantage of software simulation is its full visibility into the design; however, as more signals are monitored and their values captured, simulation slows even further.
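The visibility-versus-speed trade-off can be illustrated with a toy cycle-based simulation loop (Python, purely illustrative and not based on any real simulator's API): every additional probed signal adds waveform-capture work on every cycle, so observation cost scales with the number of probes.

```python
def simulate(num_cycles, monitored_signals, num_signals=64):
    """Toy cycle-based simulation of a design with `num_signals` state
    elements, recording waveform values only for the monitored subset."""
    state = [0] * num_signals
    waveform = {s: [] for s in monitored_signals}
    for cycle in range(num_cycles):
        # "Evaluate" the design: a trivial next-state function stands in
        # for real RTL evaluation, whose cost is fixed per cycle.
        state = [(v + cycle) & 0xFF for v in state]
        # Capture: this loop's cost grows linearly with the probe count,
        # which is why heavily-instrumented simulations run slower.
        for s in monitored_signals:
            waveform[s].append(state[s])
    return waveform

# Probing 64 signals does 16x the per-cycle capture work of probing 4.
few = simulate(10_000, range(4))
many = simulate(10_000, range(64))
```

The same principle applies to hardware-assisted approaches: on-chip instrumentation consumes real resources (memory and routing), so deciding which signals to observe is a central part of the verification methodology.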