Nvidia Trains LLM on Chip Design
By Sally Ward-Foxton, EETimes (October 30, 2023)
Nvidia has trained its NeMo large language model (LLM) on internal data to help chip designers with tasks such as answering general questions about chip design, summarizing bug documentation, and writing scripts for electronic design automation (EDA) tools. Nvidia’s chief scientist, Bill Dally, presented the LLM, dubbed ChipNeMo, in his keynote presentation at the International Conference on Computer-Aided Design today.
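For illustration only, the sketch below shows how a designer-facing assistant with the capabilities described above might be prompted for each of the three task types; it is not Nvidia's ChipNeMo interface. The `ask_model` function, the bug-report text, and the Tcl request are hypothetical placeholders standing in for a call to a domain-adapted model.

```python
# Minimal sketch (hypothetical, not Nvidia's ChipNeMo code) of prompting a
# domain-adapted LLM assistant for the three task types named in the article.

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in: replace with a real call to your own LLM endpoint."""
    return f"[model response to: {prompt[:60]}...]"

# 1. Answering a general chip-design question.
question = "When should I prefer a hierarchical place-and-route flow over a flat one?"

# 2. Summarizing bug documentation (the report text here is a made-up placeholder).
bug_report = (
    "Bug 1234: scan chain breaks when reset deasserts one cycle early; "
    "seen on netlist rev 7 under SDF back-annotation."
)
summary_request = "Summarize this bug report in two sentences:\n" + bug_report

# 3. Writing a script for an EDA tool.
script_request = (
    "Write a Tcl script that reports the ten worst setup-timing paths "
    "after synthesis."
)

for prompt in (question, summary_request, script_request):
    print(ask_model(prompt))
```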
“The goal here is to make our designers more productive,” Dally told EE Times in an interview prior to the event. “If we even got a couple percent improvement in productivity, this would be worth it. And our goals are actually to do quite a bit better than that.”
To read the full article, click here.