Charting a Productive New Course for AI in Chip Design
Chip designers are finding a valuable ally in artificial intelligence (AI) as they balance demands for greater performance, the challenges of increasing design complexity, engineering resource constraints, and the ever-present need to meet stringent time-to-market targets. By deploying AI-driven chip design and verification, teams are freed from iterative work and can focus instead on product differentiation and power, performance, and area (PPA) enhancements.
What engineers will be able to extract from their chips is becoming increasingly important in our smart everything economy. We’re in a time when essentially every device is a smart device—capturing data and transporting it to very large environments where the data gets crunched, AI models are built, insights are derived, and data gets pushed back to the edge to enhance productivity and quality of life. These demands on our devices are pushing the semiconductor industry toward a trillion-dollar trajectory. At the same time, the engineering talent shortage is real. While investments such as the CHIPS and Science Act of 2022 will eventually help fill the talent pipeline, so too will AI.
As a testament to how AI is rapidly becoming mainstream in its application in electronic design automation (EDA) and transforming the semiconductor world, the award-winning Synopsys DSO.ai AI application for chip design has notched its first 100 production tapeouts. This marks a significant milestone in an industry where AI has been the talk of the town but is only starting to make serious inroads.
Synopsys DSO.ai searches design spaces autonomously to discover optimal PPA solutions, massively scaling the exploration of choices in chip design workflows and using reinforcement learning to automate many otherwise menial tasks. Everyone from new to seasoned engineers stands to benefit, as the solution is somewhat like having an expert engineer in a box. Some of our customers are running DSO.ai in the cloud, tapping into the flexibility, scalability, and elasticity that cloud vendors offer to accommodate massive workloads and drive productivity to new heights. We've seen some impressive results from these initial customer use cases: productivity boosts of more than 3x, power reductions of up to 15%, substantial die size reductions, and lower overall resource usage.
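To make the idea of automated design-space search concrete, here is a minimal toy sketch of the general technique: iteratively sampling flow settings, scoring each trial, and biasing new trials toward the best result found so far. The parameter names and cost model below are invented purely for illustration; they are not DSO.ai's actual interface, settings, or algorithm.

```python
import random

# Hypothetical design-space knobs (illustrative only, not a real tool's API).
SPACE = {
    "clock_uncertainty_ps": [50, 100, 150],
    "placement_effort": ["low", "medium", "high"],
    "target_utilization": [0.6, 0.7, 0.8],
}

def run_flow(cfg):
    """Stand-in for a synthesis/place-and-route run: returns a synthetic
    PPA cost where lower is better. A real flow would run the actual
    tools and measure timing, power, and area."""
    cost = cfg["clock_uncertainty_ps"] / 50.0
    cost += {"low": 3.0, "medium": 2.0, "high": 1.0}[cfg["placement_effort"]]
    cost += abs(cfg["target_utilization"] - 0.7) * 10
    return cost

def explore(trials=200, epsilon=0.3, seed=0):
    """Epsilon-greedy search: mostly perturb the best-known config
    (exploit), occasionally sample a fresh random config (explore)."""
    rng = random.Random(seed)
    best_cfg, best_cost = None, float("inf")
    for _ in range(trials):
        if best_cfg is None or rng.random() < epsilon:
            # Explore: sample an entirely random configuration.
            cfg = {k: rng.choice(v) for k, v in SPACE.items()}
        else:
            # Exploit: copy the best config and re-sample one knob.
            cfg = dict(best_cfg)
            knob = rng.choice(list(SPACE))
            cfg[knob] = rng.choice(SPACE[knob])
        cost = run_flow(cfg)
        if cost < best_cost:
            best_cfg, best_cost = cfg, cost
    return best_cfg, best_cost

if __name__ == "__main__":
    cfg, cost = explore()
    print(cfg, cost)
```

Production systems replace the random sampling above with learned policies that generalize across runs, but the loop structure, evaluate many candidate flow configurations and steer toward better PPA, is the same basic shape.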