GPUs Dominate AI Compute, FPGAs Move Into the AI Data Path
As AI moves beyond data centers, system constraints are redefining where reconfigurable hardware belongs.
By Yashasvini Razdan, EE Times | January 20, 2026

FPGAs offer programmable, flexible hardware but require longer design cycles than CPU- or GPU-based systems. As AI workloads scale, demanding higher compute density, faster time to deployment, and lower energy per operation, particularly in latency- and control-bound tasks, do FPGAs still make sense in the AI stack?
In an interview with EE Times, Esam Elashmawi, chief strategy and marketing officer at Lattice Semiconductor, said FPGAs do not compete with GPUs for AI compute but instead operate as companion devices in the data path, particularly at the edge. “If you need very high performance and you are willing to live with high power, then you can use a GPU or a CPU,” he said. “FPGAs are a good companion to it.”
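A minimal sketch of that companion pattern, assuming a hypothetical edge pipeline (the function names `fpga_stage` and `gpu_stage` are illustrative, not any real Lattice or GPU API): the FPGA sits in the data path doing deterministic, latency-bound preprocessing, while the GPU consumes cleaned, batched input for compute-dense inference.

```python
# Hypothetical sketch of the division of labor Elashmawi describes:
# the FPGA handles the deterministic, latency-bound data path; the
# GPU handles high-power, high-throughput inference.

from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    """A unit of sensor data flowing through the edge pipeline."""
    samples: List[float]

def fpga_stage(frame: Frame) -> Frame:
    # On real hardware this would be fixed-function RTL: framing,
    # filtering, and format conversion at cycle-level latency.
    clipped = [min(max(s, -1.0), 1.0) for s in frame.samples]
    return Frame(samples=clipped)

def gpu_stage(batch: List[Frame]) -> List[int]:
    # Stand-in for batched neural-network inference; a trivial
    # threshold substitutes for a real model here.
    return [int(sum(f.samples) > 0) for f in batch]

# Data path: every frame passes through the FPGA stage, so the GPU
# only ever sees cleaned, batched input, never the raw sensor stream.
frames = [Frame(samples=[0.2, -3.0, 1.5]), Frame(samples=[-0.1, -0.2, 0.3])]
labels = gpu_stage([fpga_stage(f) for f in frames])
print(labels)  # [1, 0]
```

The design point is the split itself: the FPGA's role is bounded and deterministic, so it can be redesigned without touching the compute-heavy inference side, which is why it complements rather than competes with the GPU.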
To read the full article, click here