LLMs for Secure Hardware Design and Related Problems: Opportunities and Challenges

By Johann Knechtel¹, Ozgur Sinanoglu¹, and Ramesh Karri²
¹ New York University Abu Dhabi
² NYU Tandon School of Engineering

Abstract

The integration of Large Language Models (LLMs) into Electronic Design Automation (EDA) and hardware security is rapidly reshaping the semiconductor industry. While LLMs offer unprecedented capabilities for generating Register Transfer Level (RTL) code, automating testbench creation, and bridging the semantic gap between high-level specifications and silicon, they simultaneously introduce severe vulnerabilities. This comprehensive review provides an in-depth analysis of the state of the art in LLM-driven hardware design, organized around key advances in EDA synthesis, hardware trust, design for security, and education. We systematically examine the methodologies behind recent breakthroughs, from reasoning-driven synthesis and multi-agent vulnerability extraction to data contamination and adversarial machine learning (ML) evasion. We also discuss critical countermeasures, such as dynamic benchmarking to combat data memorization and aggressive red-teaming for robust security assessment. Finally, we synthesize cross-cutting lessons learned to guide future research toward secure, trustworthy, and autonomous design ecosystems.

Index Terms — Large Language Models, Hardware Security, Electronic Design Automation, Logic Locking, Hardware Trojans, Machine Unlearning, Multi-Agent Systems, Red-Teaming
