Make SoCs flexible with embedded FPGA
Geoff Tate, Flex Logix
EDN (November 30, 2016)
Systems designers have long sought to build programmability and flexibility into their system designs to meet varying customer needs and evolving standards. The two most common approaches, FPGAs and MPUs/MCUs, provide different kinds of capabilities and complement each other, but they have typically been separate devices. Now, chips that combine processors with embedded FPGA are becoming a design option.
With the growth of connectivity, information, and data, there is a growing need for new processing capabilities spanning from ultra-low-power, sub-$1 microcontrollers to very large networking chips. Moore's Law has given rise to new SoCs and MCUs for existing and emerging markets, with each of these SoCs/MCUs designed specifically for the market segment it targets. This widespread use of special-purpose architectures greatly increases the need for new designs, however, and the rise of new markets (e.g., IoT) and new device types (e.g., sensors) for those markets is outpacing the variety of SoC types available.
To read the full article, click here