GPUs Dominate AI Compute, FPGAs Move Into the AI Data Path
As AI moves beyond data centers, system constraints are redefining where reconfigurable hardware belongs.
By Yashasvini Razdan, EE Times | January 20, 2026

FPGAs offer programmable, flexible hardware but require longer design cycles than CPU- or GPU-based systems. As AI workloads scale and demand higher compute density, faster time to deployment, and lower energy per operation in latency- and control-bound tasks, do FPGAs still make sense in the AI stack?
In an interview with EE Times, Esam Elashmawi, chief strategy and marketing officer at Lattice Semiconductor, said FPGAs do not compete with GPUs for AI compute but instead operate as companion devices in the data path, particularly at the edge. “If you need very high performance and you are willing to live with high power, then you can use a GPU or a CPU,” he said. “FPGAs are a good companion to it.”
Related Semiconductor IP
- DeWarp IP
- 6-bit, 12 GSPS Flash ADC - GlobalFoundries 22nm
- LunaNet AFS LDPC Encoder and Decoder IP Core
- ReRAM NVM in DB HiTek 130nm BCD
- UFS 5.0 Host Controller IP
Related News
- Achronix Releases Groundbreaking Speedster AC7t800 Mid-Range FPGA, Driving Innovation in AI/ML, 5G/6G and Data Center Applications
- Arm Neoverse platform integrates NVIDIA NVLink Fusion to accelerate AI data center adoption
- Marvell to Acquire XConn Technologies, Expanding Leadership in AI Data Center Connectivity
- SiFive to Power Next-Gen RISC-V AI Data Centers with NVIDIA NVLink™ Fusion
Latest News
- Hardware Root of Trust Essential for AI Chip Integrity
- AI Compute Demand Drives 44% YoY Growth for Top 10 Global Fabless IC Firms in 2025
- IBM Announces Strategic Collaboration with Arm to Shape the Future of Enterprise Computing
- Rambus Unveils HBM4E Controller: 16 GT/s, 2,048-Bit Interface, Enabling C-HBM4E
- AimFuture, a Leader in Home Appliance NPUs, to Integrate Mesacure Company’s AI Algorithms