GPUs Dominate AI Compute, FPGAs Move Into the AI Data Path
As AI moves beyond data centers, system constraints are redefining where reconfigurable hardware belongs.
By Yashasvini Razdan, EE Times | January 20, 2026

FPGAs offer programmable, flexible hardware but require longer design cycles than CPU- or GPU-based systems. As AI workloads scale, demanding higher compute density, faster time to deployment, and lower energy per operation in latency- and control-bound tasks, do FPGAs still make sense in the AI stack?
In an interview with EE Times, Esam Elashmawi, chief strategy and marketing officer at Lattice Semiconductor, said FPGAs do not compete with GPUs for AI compute but instead operate as companion devices in the data path, particularly at the edge. “If you need very high performance and you are willing to live with high power, then you can use a GPU or a CPU,” he said. “FPGAs are a good companion to it.”