Nvidia Trains LLM on Chip Design
By Sally Ward-Foxton, EETimes (October 30, 2023)
Nvidia has trained its NeMo large language model (LLM) on internal data to help chip designers with tasks including answering general questions about chip design, summarizing bug documentation, and writing scripts for EDA tools. Nvidia’s chief scientist, Bill Dally, presented the LLM, dubbed ChipNeMo, in his keynote presentation at the International Conference on Computer-Aided Design today.
“The goal here is to make our designers more productive,” Dally told EE Times in an interview prior to the event. “If we even got a couple percent improvement in productivity, this would be worth it. And our goals are actually to do quite a bit better than that.”
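ChipNeMo itself is internal to Nvidia and has not been released, but the script-writing use case Dally describes can be sketched with a generic text-generation API. The short Python example below is only an illustration of that workflow: the model identifier and prompt are placeholders, not Nvidia's actual model or tooling.

    # Minimal sketch of the "write scripts for EDA tools" use case described above.
    # The model identifier is a hypothetical stand-in; ChipNeMo is Nvidia-internal,
    # so a domain-adapted checkpoint of your own would have to be substituted here.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="example-org/chip-design-llm",  # placeholder, not a real released model
    )

    prompt = (
        "Write a Tcl script for a synthesis tool that reads design.v, "
        "defines a 500 MHz clock on port clk, and reports worst negative slack."
    )

    result = generator(prompt, max_new_tokens=256, do_sample=False)
    print(result[0]["generated_text"])

In practice, the value of a domain-adapted model comes from training on internal data such as bug reports and tool documentation, which is what lets it answer design questions and emit tool-specific scripts rather than generic code.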