Charting a Productive New Course for AI in Chip Design
Chip designers are finding a valuable ally in artificial intelligence (AI) as they balance demands for greater performance, increasing design complexity, engineering resource constraints, and the ever-present need to meet stringent time-to-market targets. By deploying AI-driven chip design and verification, teams are freed from repetitive, iterative work and can focus instead on product differentiation and power, performance, and area (PPA) enhancements.
What engineers can extract from their chips is becoming increasingly important in our smart everything economy. We're in a time when essentially every device is a smart device: capturing data and transporting it to large compute environments where the data is crunched, AI models are built, insights are derived, and results are pushed back to the edge to enhance productivity and quality of life. These demands on our devices are pushing the semiconductor industry toward a trillion-dollar trajectory. At the same time, the engineering talent shortage is real. While investments such as the CHIPS and Science Act of 2022 will eventually help fill the talent pipeline, so too will AI.
As a testament to how AI is rapidly becoming mainstream in its application in electronic design automation (EDA) and transforming the semiconductor world, the award-winning Synopsys DSO.ai AI application for chip design has notched its first 100 production tapeouts. This marks a significant milestone in an industry where AI has been the talk of the town but is only starting to make serious inroads.
Synopsys DSO.ai searches design spaces autonomously to discover optimal PPA solutions, massively scaling the exploration of choices in chip design workflows and using reinforcement learning to automate many otherwise menial tasks. Engineers from newcomers to seasoned veterans stand to benefit, as the solution is somewhat like having an expert engineer in a box. Some of our customers are using DSO.ai in the cloud, tapping into the flexibility, scalability, and elasticity that on-premises and public cloud vendors offer to accommodate massive workloads and drive productivity to new heights. We've seen some impressive results from these initial customer use cases: productivity boosts of more than 3x, power reductions of up to 15%, substantial die size reductions, and lower overall resource usage.
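To give a flavor of what autonomous design-space search means, here is a deliberately minimal toy sketch. It is not DSO.ai's algorithm: every name, parameter, and cost function below is invented for illustration, and the stand-in cost function replaces what would really be a full synthesis and place-and-route run. The sketch uses a simple epsilon-greedy strategy, one of the most basic ideas behind reinforcement-learning-style exploration: mostly exploit the best configuration found so far, but occasionally try a random one.

```python
import random

# Hypothetical design space: a few tool-parameter combinations.
# Real flows explore vastly larger spaces with many more knobs.
DESIGN_SPACE = [
    {"clock_uncertainty": cu, "utilization": ut}
    for cu in (0.05, 0.10, 0.15)
    for ut in (0.60, 0.70, 0.80)
]

def ppa_cost(cfg):
    """Stand-in for an expensive EDA run that scores a configuration.

    Lower is better; the true optimum here is (0.10, 0.70).
    """
    return abs(cfg["clock_uncertainty"] - 0.10) + abs(cfg["utilization"] - 0.70)

def epsilon_greedy_search(trials=50, epsilon=0.2, seed=0):
    """Explore the design space, mostly exploiting the best config seen so far."""
    rng = random.Random(seed)
    best_cfg, best_cost = None, float("inf")
    for _ in range(trials):
        if best_cfg is None or rng.random() < epsilon:
            cfg = rng.choice(DESIGN_SPACE)  # explore: try a random configuration
        else:
            cfg = best_cfg                  # exploit: re-evaluate the current best
        cost = ppa_cost(cfg)
        if cost < best_cost:
            best_cfg, best_cost = cfg, cost
    return best_cfg, best_cost

best_cfg, best_cost = epsilon_greedy_search()
```

Production tools replace the toy cost function with real tool runs and use far more sophisticated learned policies, but the core loop of evaluate, learn, and refocus the search is the same idea.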