Charting a Productive New Course for AI in Chip Design
Chip designers are finding a valuable ally in artificial intelligence (AI) as they balance demands for greater performance, the challenges of increasing design complexity, engineering resource constraints, and the ever-present need to meet stringent time-to-market targets. By deploying AI-driven chip design and verification, teams are freed from iterative work and can focus instead on product differentiation and power, performance, and area (PPA) enhancements.
What engineers will be able to extract from their chips is becoming increasingly important in our smart everything economy. We’re in a time when essentially every device is a smart device: capturing data and sending it to large data center environments where the data gets crunched, AI models are built, insights are derived, and results are pushed back to the edge to enhance productivity and quality of life. These demands on our devices are pushing the semiconductor industry toward a trillion-dollar trajectory. At the same time, the engineering talent shortage is real. While investments such as the CHIPS and Science Act of 2022 will eventually help fill the talent pipeline, so too will AI.
As a testament to how rapidly AI is becoming mainstream in electronic design automation (EDA) and transforming the semiconductor world, the award-winning Synopsys DSO.ai AI application for chip design has notched its first 100 production tapeouts. This marks a significant milestone in an industry where AI has been the talk of the town but is only starting to make serious inroads.
Synopsys DSO.ai searches design spaces autonomously to discover optimal PPA solutions, massively scaling the exploration of choices in chip design workflows and using reinforcement learning to automate many otherwise menial tasks. Engineers from newcomers to seasoned veterans stand to benefit, as the solution is somewhat like having an expert engineer in a box. Some of our customers are using DSO.ai in the cloud, tapping into the flexibility, scalability, and elasticity that on-premises and public cloud deployments offer to accommodate massive workloads and drive productivity to new heights. We’ve seen some impressive results from these initial customer use cases: productivity boosts of more than 3x, power reductions of up to 15%, substantial die size reductions, and lower overall resource usage.
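To make the idea of automated design-space exploration more concrete, here is a minimal, hypothetical Python sketch. It is not Synopsys’s implementation and does not call any DSO.ai or EDA tool APIs: the parameter names and the evaluate_ppa function are illustrative stand-ins, and a simple epsilon-greedy search substitutes for the reinforcement learning described above.

```python
import random

# Hypothetical knobs an implementation flow might expose; names are
# illustrative only, not actual DSO.ai or EDA tool parameters.
SEARCH_SPACE = {
    "target_clock_ns":   [0.8, 0.9, 1.0, 1.1],
    "placement_density": [0.55, 0.65, 0.75],
    "synthesis_effort":  ["medium", "high", "ultra"],
}

def evaluate_ppa(config):
    """Stand-in for running the flow and scoring PPA.
    A real evaluator would launch synthesis and place-and-route, then
    combine timing, power, and area from tool reports into one reward.
    Here it returns a synthetic value purely for illustration."""
    return random.random()

def explore(iterations=20, epsilon=0.3):
    """Epsilon-greedy exploration: usually perturb one knob of the best
    known configuration, occasionally try a fully random one."""
    best_cfg = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
    best_reward = evaluate_ppa(best_cfg)
    for _ in range(iterations):
        if random.random() < epsilon:
            cfg = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
        else:
            cfg = dict(best_cfg)
            knob = random.choice(list(SEARCH_SPACE))
            cfg[knob] = random.choice(SEARCH_SPACE[knob])
        reward = evaluate_ppa(cfg)
        if reward > best_reward:
            best_cfg, best_reward = cfg, reward
    return best_cfg, best_reward

if __name__ == "__main__":
    cfg, reward = explore()
    print(f"Best configuration found: {cfg} (reward={reward:.3f})")
```

In a production flow, the evaluation step would be full synthesis and place-and-route runs, and a learned policy rather than random perturbation would decide which configuration to try next, which is where reinforcement learning pays off across large design spaces.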