Leveraging AI to Optimize Debug Productivity and Verification Throughput
The impact of semiconductors on various sectors cannot be overstated. From automotive to IoT, communications, and high-performance computing (HPC), semiconductors have transformed how these industries operate. However, as demand for higher performance and instant gratification grows, the complexity of SoCs has increased significantly. With hundreds of IPs integrated into a single SoC, bugs have become more common and more challenging to fix, and SoC-level verification is now a major source of tapeout schedule delays. Detecting bugs within the allocated budget and timeline is increasingly difficult, especially with shrinking geometries and growing gate counts. With SoC design engineers spending more than 70% of their time on verification, detecting a single bug takes an average of 16 to 20 engineering hours. Imagine the impact of having 1,000 bugs in a design!
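To put those numbers in perspective, here is a minimal back-of-the-envelope sketch in Python. The 1,000-bug count and the 16-20 hours-per-bug range come from the figures above; the 2,080-hour engineer work year (52 weeks x 40 hours) is an assumption used only for illustration.

```python
# Back-of-the-envelope debug-effort estimate based on the figures above.
# Assumption: one engineer-year ~= 2,080 working hours (52 weeks x 40 hours).

HOURS_PER_ENGINEER_YEAR = 52 * 40  # assumed full-time work year


def debug_effort(num_bugs: int, hours_low: float, hours_high: float) -> tuple[float, float]:
    """Return the (low, high) total engineering hours to debug num_bugs bugs."""
    return num_bugs * hours_low, num_bugs * hours_high


low_hours, high_hours = debug_effort(1_000, 16, 20)
print(f"Total debug effort: {low_hours:,.0f}-{high_hours:,.0f} engineering hours")
print(f"Roughly {low_hours / HOURS_PER_ENGINEER_YEAR:.1f}-"
      f"{high_hours / HOURS_PER_ENGINEER_YEAR:.1f} engineer-years spent on debug alone")
```

Under these assumptions, 1,000 bugs translate to 16,000-20,000 engineering hours, or roughly 8-10 engineer-years of debug effort, which is why improving debug productivity has such an outsized effect on verification throughput.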