Nvidia Trains LLM on Chip Design
By Sally Ward-Foxton, EETimes (October 30, 2023)
Nvidia has trained its NeMo large language model (LLM) on internal data to help its chip designers with tasks such as answering general questions about chip design, summarizing bug documentation, and writing scripts for EDA tools. Nvidia’s chief scientist, Bill Dally, presented the model, dubbed ChipNeMo, in his keynote at the International Conference on Computer-Aided Design today.
“The goal here is to make our designers more productive,” Dally told EE Times in an interview prior to the event. “If we even got a couple percent improvement in productivity, this would be worth it. And our goals are actually to do quite a bit better than that.”
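The article does not describe how the assistant is wired into designers' workflows, so the snippet below is only a rough sketch of the "writing scripts for EDA tools" use case: a designer asking a locally hosted, domain-adapted LLM for a Tcl snippet. The endpoint URL, model name, request format, and the `ask_chip_assistant` helper are illustrative assumptions, not part of ChipNeMo.

```python
# Purely illustrative sketch (not Nvidia's ChipNeMo code): query a hypothetical
# locally hosted, domain-adapted LLM for an EDA-tool script. The endpoint URL,
# model name, and JSON schema are assumptions made for this example.
import json
import urllib.request


def ask_chip_assistant(prompt: str) -> str:
    """Send a prompt to an assumed local LLM inference server and return its reply."""
    payload = json.dumps({
        "model": "chip-assistant",   # assumed model identifier
        "prompt": prompt,
        "max_tokens": 512,
    }).encode("utf-8")
    request = urllib.request.Request(
        "http://localhost:8000/v1/completions",  # assumed endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    return body["choices"][0]["text"]  # assumed response schema


if __name__ == "__main__":
    # Example of the EDA-scripting use case mentioned in the article.
    print(ask_chip_assistant(
        "Write a Tcl snippet that reports the ten worst setup-timing paths "
        "in the current design."
    ))
```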