Leveraging AI to Optimize Debug Productivity and Verification Throughput
The impact of semiconductors on various sectors cannot be overstated. Semiconductors have revolutionized operations across industries, from automotive to IoT, communications, and high-performance computing (HPC). However, as demand for high performance and instant gratification increases, the complexity of systems-on-chip (SoCs) has grown significantly. With hundreds of IPs integrated into a single SoC, bugs have become more common and more challenging to fix. Verification at the SoC level is now a major cause of tapeout schedule delays. Detecting bugs within the allocated budget and timeline is becoming increasingly difficult, especially as process geometries shrink and gate counts grow. With SoC design engineers spending more than 70% of their time on verification, detecting a single bug takes an average of 16-20 engineering hours. Imagine the impact of having 1,000 bugs in a design!
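To put that last figure in perspective, here is a minimal back-of-the-envelope sketch. The 16-20 hours per bug and the 1,000-bug count come from the paragraph above; the 2,000-hour engineering year used for the conversion is an assumption for illustration only.

```python
# Rough scale check on debug effort, using the figures quoted above.
# Hours-per-bug and bug count are from the article; the 2,000-hour
# engineering year (~50 weeks x 40 hours) is an assumed conversion.

HOURS_PER_BUG_LOW, HOURS_PER_BUG_HIGH = 16, 20
BUG_COUNT = 1_000
HOURS_PER_ENGINEER_YEAR = 2_000  # assumption, not from the article

low_total = BUG_COUNT * HOURS_PER_BUG_LOW    # 16,000 engineering hours
high_total = BUG_COUNT * HOURS_PER_BUG_HIGH  # 20,000 engineering hours

print(f"Total debug effort: {low_total:,}-{high_total:,} engineering hours")
print(f"Roughly {low_total // HOURS_PER_ENGINEER_YEAR}-"
      f"{high_total // HOURS_PER_ENGINEER_YEAR} engineer-years of debug alone")
```

At that rate, a 1,000-bug design would absorb on the order of 16,000-20,000 engineering hours, which is why even modest per-bug gains from AI-assisted debug translate directly into schedule and headcount savings.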
To read the full article, click here
Related Semiconductor IP
- eDP 2.0 Verification IP
- Gen#2 of 64-bit RISC-V core with out-of-order pipeline based complex
- LLM AI IP Core
- Post-Quantum Digital Signature IP Core
- Compact Embedded RISC-V Processor
Related Blogs
- UA Link vs Interlaken: What you need to know about the right protocol for AI and HPC interconnect fabrics
- Ultra Ethernet Consortium Set to Enable Scaling of Networking Interconnects for AI and HPC
- Reducing Manual Effort and Achieving Better Chip Verification Coverage with AI and Formal Techniques
- Say Goodbye to Limits and Hello to Freedom of Scalability in the MIPS P8700