Nvidia Trains LLM on Chip Design
By Sally Ward-Foxton, EETimes (October 30, 2023)
Nvidia has trained its NeMo large language model (LLM) on internal data to help chip designers with tasks including answering general questions about chip design, summarizing bug documentation, and writing scripts for EDA tools. Nvidia’s chief scientist, Bill Dally, presented the LLM, dubbed ChipNeMo, in his keynote presentation at the International Conference on Computer-Aided Design today.
“The goal here is to make our designers more productive,” Dally told EE Times in an interview prior to the event. “If we even got a couple percent improvement in productivity, this would be worth it. And our goals are actually to do quite a bit better than that.”
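The general idea of adapting a foundation LLM to in-house engineering data can be illustrated, very roughly, with a fine-tuning sketch like the one below. This is a hypothetical example, not Nvidia's ChipNeMo pipeline (which is built on NeMo tooling and proprietary data); the base model, corpus, and hyperparameters here are placeholders for illustration only.

```python
# Hypothetical sketch: continued pretraining of a causal LM on internal
# chip-design text (bug reports, design docs, EDA scripts) using the
# Hugging Face Trainer API. Not Nvidia's actual ChipNeMo workflow.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import Dataset

# Placeholder stand-ins for an internal document corpus.
internal_docs = [
    "Bug 1234: hold-time violation on the scan chain after clock-tree synthesis.",
    "Tcl snippet: report_timing -delay_type max -max_paths 10",
]

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # small stand-in base model
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

dataset = Dataset.from_dict({"text": internal_docs}).map(
    tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="chip-lm",
                           num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

After domain adaptation along these lines, such a model could be further instruction-tuned or paired with retrieval over internal documentation to answer designers' questions, which is the kind of assistant use case the article describes.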