What's Really Behind the Adoption of eFPGA?
By Andy Jaros, VP of Flex Logix
System companies are taking a more proactive role in co-designing their hardware and software roadmaps, so it’s no surprise that they are also driving the adoption of embedded FPGAs (eFPGA).
But why, and why has it taken so long?
Today, most system companies use FPGAs either to offload compute-intensive workloads from the main processor or to provide broader I/O capability than a packaged ASIC can support. In these cases, the FPGA connects a hard-wired accelerator to the ASIC’s processor subsystem or bridges the ASIC to the system bus. By definition, the FPGA is universal: it is a blank programmable fabric, whether it comes from Intel, Xilinx, Lattice, or another vendor, which makes it invaluable to system companies that cannot get the functionality they need or want from their semiconductor partner. They also gain the added benefit of not having to share their proprietary circuitry with that partner or with other companies in the supply chain.
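To make the offload pattern above concrete, here is a minimal, hypothetical C sketch of a host processor handing a job to an FPGA accelerator exposed as a memory-mapped register block. The base address, register map, and buffer addresses are invented for illustration and are not a Flex Logix or vendor-specific interface.

```c
/*
 * Hypothetical sketch: offloading a compute-intensive kernel from the host
 * processor to an FPGA/eFPGA accelerator exposed as memory-mapped registers.
 * All addresses and register offsets below are illustrative assumptions.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define ACCEL_BASE   0x43C00000u   /* assumed physical base of the accelerator   */
#define REG_SRC_ADDR 0x00          /* assumed register map: source buffer        */
#define REG_DST_ADDR 0x04          /*                       destination buffer   */
#define REG_LENGTH   0x08          /*                       element count        */
#define REG_CTRL     0x0C          /* bit 0 = start                              */
#define REG_STATUS   0x10          /* bit 0 = done                               */

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    /* Map the accelerator's register window into user space. */
    volatile uint32_t *regs = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, ACCEL_BASE);
    if (regs == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* Program the job: the physical buffer addresses are assumed to have
       been set up by a DMA allocator elsewhere in the system. */
    regs[REG_SRC_ADDR / 4] = 0x10000000u;
    regs[REG_DST_ADDR / 4] = 0x10100000u;
    regs[REG_LENGTH   / 4] = 4096;
    regs[REG_CTRL     / 4] = 1;            /* kick off the accelerator */

    /* Poll until the fabric signals completion, then the CPU resumes. */
    while ((regs[REG_STATUS / 4] & 1) == 0)
        ;

    printf("accelerator job complete\n");
    munmap((void *)regs, 0x1000);
    close(fd);
    return 0;
}
```

The same pattern applies whether the programmable fabric sits in a discrete FPGA on the board or as eFPGA inside the ASIC; only the address decoding and interconnect change, not the host-side software model.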