Industry-Leading PPA DSP Available for All Existing EFLX eFPGA
InferX delivers ~30x the DSP performance/mm2 of eFPGA
MOUNTAIN VIEW, Calif. – March 6, 2024 – Flex Logix® Technologies, Inc., the leading supplier of embedded FPGA (eFPGA) IP and reconfigurable DSP/AI solutions, announced today that InferX DSP is in development for use with existing EFLX eFPGA from 40nm to 7nm.
“Many customers use embedded FPGA for high-performance signal processing and want more throughput from less silicon area,” said Geoff Tate, CEO of Flex Logix. “InferX is a tensor processor (TPU) with 128 INT16 MACs and a coefficient matrix, delivered as soft IP. The TPU packs ~15x more MACs/mm2 than eFPGA and runs at about double the frequency. One, two, four, eight or 16 TPUs can be controlled by existing EFLX eFPGA from 40nm to 7nm. Flex Logix writes the ‘soft logic’ (Verilog) code for all of the DSP operators, and the customer programs at a high level using Simulink and the InferX Compiler, both now in development for a major customer. The InferX solution is reconfigurable at boot time or during execution.”
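The ~30x headline figure follows from the two factors in the quote above: roughly 15x the MAC density multiplied by roughly 2x the operating frequency. A minimal back-of-the-envelope sketch of that arithmetic (the 15x and 2x factors come from the quote; the script itself is only illustrative):

    # Back-of-the-envelope: how the ~30x performance/mm2 headline follows
    # from the two factors quoted above (both figures are approximate).
    mac_density_gain = 15   # ~15x more MACs per mm2 than eFPGA
    frequency_gain = 2      # ~2x the operating frequency

    perf_per_mm2_gain = mac_density_gain * frequency_gain
    print(f"Estimated DSP performance/mm2 gain vs. eFPGA: ~{perf_per_mm2_gain}x")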
“InferX with existing EFLX eFPGA allows us to deliver much higher DSP throughput at much lower cost than eFPGA with MACs. For example, 16 TPUs with 12nm eFPGA can run complex INT16 FFTs at 2 Gigasamples/second (sample = INT16), dynamically switching FFT sizes from 128 to 8K points. Accuracy is very high because all accumulation is done in INT40,” said Cheng Wang, CTO of Flex Logix. “Even ultra-low-power 40nm achieves significant throughput, running a 4K-point complex FFT at 6 Megasamples/second.” Benchmarks are available for all process nodes with silicon today: 40nm, 28nm, 22nm, 16nm, 12nm and 7nm.
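To translate the quoted sample rates into FFT frame rates, a simple conversion is sketched below. It assumes streaming, non-overlapping frames (one N-point FFT consumes N input samples), which is an assumption on our part rather than a detail stated in the release:

    # Convert the quoted streaming sample rates into complete FFTs per second,
    # assuming non-overlapping frames (one N-point FFT consumes N input samples).

    def ffts_per_second(sample_rate_sps: float, fft_points: int) -> float:
        return sample_rate_sps / fft_points

    # 12nm example from the release: 16 TPUs at 2 Gigasamples/second
    for n in (128, 8192):
        print(f"12nm, {n:>4}-point complex FFT: {ffts_per_second(2e9, n):,.0f} FFTs/s")

    # 40nm example from the release: 4K-point complex FFT at 6 Megasamples/second
    print(f"40nm, 4096-point complex FFT: {ffts_per_second(6e6, 4096):,.0f} FFTs/s")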
Learn more at https://flex-logix.com/inferx-dsp/
InferX hardware can also run AI workloads.
About Flex Logix
Flex Logix is a reconfigurable computing company providing leading-edge eFPGA and AI inference technologies for semiconductor and systems companies. Flex Logix eFPGA enables volume FPGA users to integrate the FPGA into their companion SoC, resulting in a 5-10x reduction in the cost and power of the FPGA and increasing compute density, which is critical for communications, networking, data centers, microcontrollers and other applications. Its scalable AI inference solution is the most efficient, providing much higher inference throughput per square millimeter and per watt. Flex Logix supports process nodes from 180nm to 7nm, with 5nm, 3nm and 18A in development. Flex Logix is headquartered in Mountain View, California and has an office in Austin, Texas. For more information, visit https://flex-logix.com.
For general information on the InferX and EFLX product lines, visit https://flex-logix.com.