Leveraging AI to Optimize Debug Productivity and Verification Throughput
The impact of semiconductors on various sectors cannot be overstated. Semiconductors have revolutionized industries ranging from automotive to IoT, communications, and HPC. However, as demand for high performance and instant gratification increases, the complexity of SoCs has grown significantly. With hundreds of IPs integrated into a single SoC, bugs have become more common and more challenging to fix, and the verification process at the SoC level is now causing significant delays in tapeout schedules. Detecting bugs within the allocated budget and timeline is becoming increasingly difficult, especially with shrinking geometries and growing gate counts. With SoC design engineers spending more than 70% of their verification time on debug, detecting a single bug takes an average of 16-20 engineering hours. Imagine the impact of having 1,000 such bugs in a design!
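To put that closing figure in perspective, here is a minimal back-of-the-envelope sketch using only the numbers quoted above (16-20 engineering hours per bug, 1,000 bugs); the 2,000-hour engineer-year used for the conversion is an assumption, not a figure from the article.

```python
# Rough estimate of total debug effort from the per-bug figures above.
BUG_COUNT = 1_000
HOURS_PER_BUG_LOW, HOURS_PER_BUG_HIGH = 16, 20
HOURS_PER_ENGINEER_YEAR = 2_000  # assumption: ~50 weeks x 40 hours

low_hours = BUG_COUNT * HOURS_PER_BUG_LOW
high_hours = BUG_COUNT * HOURS_PER_BUG_HIGH

print(f"Total debug effort: {low_hours:,}-{high_hours:,} engineering hours")
print(f"Roughly {low_hours / HOURS_PER_ENGINEER_YEAR:.0f}-"
      f"{high_hours / HOURS_PER_ENGINEER_YEAR:.0f} engineer-years of work")
```

Under these assumptions, a design with 1,000 bugs represents on the order of 16,000-20,000 engineering hours, or roughly 8-10 engineer-years spent purely on debug.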