David vs. Goliath: Can Small Models Win Big with Agentic AI in Hardware Design?
By Shashwat Shankar¹, Subhranshu Pandey¹, Innocent Dengkhw Mochahari¹, Bhabesh Mali¹, Animesh Basak Chowdhury², Sukanta Bhattacharjee¹, Chandan Karfa¹
¹Indian Institute of Technology Guwahati, India
²NXP USA, Inc.

Abstract
Large Language Model (LLM) inference demands massive compute and energy, making domain-specific tasks expensive and unsustainable. As foundation models keep scaling, we ask: is bigger always better for hardware design? Our work tests this by evaluating Small Language Models (SLMs) coupled with a curated agentic AI framework on NVIDIA's Comprehensive Verilog Design Problems (CVDP) benchmark. Results show that agentic workflows, through task decomposition, iterative feedback, and correction, not only unlock near-LLM performance at a fraction of the cost but also create learning opportunities for agents, paving the way for efficient, adaptive solutions in complex design tasks.
Keywords: AI-assisted Hardware Design, Agentic AI, Large Language Model, Small Language Model, Benchmarking
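To make the abstract's generate-check-correct workflow concrete, here is a minimal Python sketch of one such agentic loop: an SLM proposes Verilog, a tool checks it, and compiler errors are fed back for correction. This is an illustration under stated assumptions, not the authors' implementation; `query_slm`, `check_verilog`, `agentic_loop`, and `MAX_ITERS` are hypothetical names, and the checker assumes the open-source Icarus Verilog compiler (`iverilog`) is on the PATH.

```python
"""Hypothetical sketch of an agentic generate-check-correct loop for RTL.

Not the paper's implementation: `query_slm` stands in for any small
language model API, and the checker assumes Icarus Verilog (iverilog)
is installed.
"""
import pathlib
import subprocess
import tempfile

MAX_ITERS = 5  # bound the feedback loop so a failing task terminates


def check_verilog(src: str) -> str:
    """Compile the candidate RTL; return "" on success, else the
    compiler's error log, which becomes feedback for the model."""
    with tempfile.TemporaryDirectory() as tmp:
        path = pathlib.Path(tmp) / "candidate.v"
        path.write_text(src)
        result = subprocess.run(
            ["iverilog", "-o", str(pathlib.Path(tmp) / "a.out"), str(path)],
            capture_output=True,
            text=True,
        )
        return "" if result.returncode == 0 else result.stderr


def agentic_loop(task: str, query_slm) -> str | None:
    """Core loop: generate a candidate, check it with a tool,
    and correct it by appending the tool's feedback to the prompt."""
    prompt = f"Write synthesizable Verilog for: {task}"
    for _ in range(MAX_ITERS):
        candidate = query_slm(prompt)
        errors = check_verilog(candidate)
        if not errors:
            return candidate  # candidate passes the tool check
        # Iterative correction: the error log steers the next attempt.
        prompt = (
            f"{prompt}\n\nYour previous attempt failed with:\n"
            f"{errors}\nFix the code and return only Verilog."
        )
    return None  # no passing candidate within the iteration budget
```

In a fuller framework of the kind the abstract describes, a task-decomposition stage would first split the design specification into sub-modules, each driven through a loop like this one.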