Nvidia Trains LLM on Chip Design
By Sally Ward-Foxton, EETimes (October 30, 2023)
Nvidia has trained its NeMo large language model (LLM) on internal data to help chip designers with tasks such as answering general questions about chip design, summarizing bug documentation, and writing scripts for EDA tools. Nvidia’s chief scientist, Bill Dally, presented the LLM, dubbed ChipNeMo, in his keynote at the International Conference on Computer-Aided Design today.
“The goal here is to make our designers more productive,” Dally told EE Times in an interview prior to the event. “If we even got a couple percent improvement in productivity, this would be worth it. And our goals are actually to do quite a bit better than that.”
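The article does not include any of Nvidia’s code, but the pattern it describes, prompting a domain-adapted LLM to draft scripts for EDA tools, can be sketched with openly available tooling. The snippet below is an illustrative assumption rather than ChipNeMo itself: it uses the Hugging Face transformers pipeline with a placeholder checkpoint ("gpt2") to turn a plain-English request into a draft Tcl script.

```python
# Illustrative sketch only: ChipNeMo is internal to Nvidia and has not been
# released, so this uses the open-source Hugging Face transformers library
# with a placeholder checkpoint to show the general pattern the article
# describes: prompting a language model to draft a script for an EDA tool.
from transformers import pipeline

# "gpt2" is a stand-in; a domain-adapted, instruction-tuned model would be
# needed to produce usable Tcl in practice.
generator = pipeline("text-generation", model="gpt2")

prompt = (
    "Write a Tcl script for a timing-analysis tool that reports all paths "
    "with negative slack in the current design:\n"
)

# Greedy decoding, capped at 128 new tokens.
outputs = generator(prompt, max_new_tokens=128, do_sample=False)
print(outputs[0]["generated_text"])
```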