Charting a Productive New Course for AI in Chip Design
Chip designers are finding a valuable ally in artificial intelligence (AI) as they balance demands for greater performance, the challenges of increasing design complexity, engineering resource constraints, and the ever-present need to meet stringent time-to-market targets. By deploying AI-driven chip design and verification, teams are freed from iterative work and can focus instead on product differentiation and power, performance, and area (PPA) enhancements.
What engineers can extract from their chips is becoming increasingly important in our smart-everything economy. We’re in a time when essentially every device is a smart device—capturing data and transporting it to large-scale compute environments where the data is crunched, AI models are built, insights are derived, and results are pushed back to the edge to enhance productivity and quality of life. These demands on our devices are pushing the semiconductor industry onto a trillion-dollar trajectory. At the same time, the engineering talent shortage is real. While investments such as the CHIPS and Science Act of 2022 will eventually help fill the talent pipeline, so too will AI.
As a testament to how rapidly AI is becoming mainstream in electronic design automation (EDA) and transforming the semiconductor world, the award-winning Synopsys DSO.ai AI application for chip design has notched its first 100 production tapeouts. This marks a significant milestone in an industry where AI has been the talk of the town but is only starting to make serious inroads.
Synopsys DSO.ai searches design spaces autonomously to discover optimal PPA solutions, massively scaling the exploration of choices in chip design workflows and using reinforcement learning to automate many otherwise menial tasks. Engineers from newcomers to seasoned veterans stand to benefit, as the solution is somewhat like having an expert engineer in a box. Some of our customers are using DSO.ai in the cloud, tapping into the flexibility, scalability, and elasticity that on-premises and public cloud vendors offer to accommodate massive workloads and drive productivity to new heights. We’ve seen some impressive results from these initial customer use cases: productivity boosts of more than 3x, power reductions of up to 15%, substantial die size reductions, and less use of overall resources.
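To make the idea of design-space exploration concrete, here is a minimal, hypothetical sketch in Python. DSO.ai's actual parameter space, reward model, and reinforcement-learning machinery are proprietary and far more sophisticated; the knob names (`target_clock_ns`, `placement_effort`, `max_fanout`), the `mock_ppa_score` function, and the simple epsilon-greedy loop below are all invented stand-ins meant only to illustrate the general pattern of automatically searching flow settings for a better PPA outcome.

```python
import random

# Hypothetical flow knobs -- invented for illustration, not DSO.ai's real
# parameter space. Each knob has a small set of candidate settings.
KNOBS = {
    "target_clock_ns": [0.8, 1.0, 1.2],
    "placement_effort": ["medium", "high"],
    "max_fanout": [16, 32, 64],
}

def mock_ppa_score(config):
    """Stand-in for a real synthesis/place-and-route run that would return
    measured power, performance, and area. Here: a deterministic toy score
    plus noise, so the search loop has something to optimize."""
    score = 0.0
    score += {0.8: 3.0, 1.0: 2.0, 1.2: 1.0}[config["target_clock_ns"]]
    score += {"medium": 0.5, "high": 1.5}[config["placement_effort"]]
    score += {16: 1.0, 32: 0.7, 64: 0.2}[config["max_fanout"]]
    return score + random.gauss(0, 0.3)

def explore(episodes=200, epsilon=0.2):
    """Epsilon-greedy search over the knob space: mostly perturb the best
    configuration seen so far, occasionally try a fully random one."""
    best_config, best_score = None, float("-inf")
    for _ in range(episodes):
        if best_config is None or random.random() < epsilon:
            # Explore: sample a random point in the design space.
            config = {k: random.choice(v) for k, v in KNOBS.items()}
        else:
            # Exploit: perturb one knob of the incumbent best configuration.
            config = dict(best_config)
            knob = random.choice(list(KNOBS))
            config[knob] = random.choice(KNOBS[knob])
        score = mock_ppa_score(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

if __name__ == "__main__":
    config, score = explore()
    print(f"Best configuration found: {config} (score {score:.2f})")
```

Where a human engineer might run a handful of such experiments by hand, an autonomous agent can evaluate thousands in parallel and learn which regions of the space are worth revisiting, which is the productivity leverage the article describes.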