Nvidia Trains LLM on Chip Design
By Sally Ward-Foxton, EETimes (October 30, 2023)
Nvidia has trained its NeMo large language model (LLM) on internal data to help chip designers with tasks such as answering general questions about chip design, summarizing bug documentation, and writing scripts for EDA tools. Nvidia’s chief scientist, Bill Dally, presented the LLM, dubbed ChipNeMo, in his keynote at the International Conference on Computer-Aided Design today.
“The goal here is to make our designers more productive,” Dally told EE Times in an interview prior to the event. “If we even got a couple percent improvement in productivity, this would be worth it. And our goals are actually to do quite a bit better than that.”
To read the full article, click here