Optimizing AI and Machine Learning with eFPGAs
By Cheng Wang, Flex Logix, Inc.
August 6th, 2018, eecatalog.com
Why the performance and flexibility offered by eFPGA are turning out to be a game changer for anyone designing AI and machine learning systems and struggling to meet their compute demands.
The market for artificial intelligence (AI) and machine learning applications has been growing substantially over the last several years. Designers have a tough row to hoe when it comes to satisfying these applications' seemingly insatiable appetite for compute. They are finding that traditional von Neumann processor architectures are not optimal for the neural networks fundamental to AI and machine learning.
When GPUs are used to train neural networks, they rely on floating-point math, which is very compute-intensive. Inference, however, can be done with integer math, and designers can accelerate that computation by turning to FPGAs for neural network processing. Many companies are starting to recognize this, with Microsoft's Project Brainwave, which uses FPGA chips to accelerate AI, as a prime example.
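To see why integer math matters for inference, consider the sketch below, which compares a float32 matrix-vector product (the core operation of a fully connected layer) against an int8 quantized version of the same computation. The layer sizes, symmetric per-tensor scaling, and NumPy implementation are illustrative assumptions for this article, not Flex Logix's or Microsoft's implementation; they simply show how int8 multiply-accumulates, the kind of arithmetic an FPGA fabric handles efficiently, can approximate the floating-point result.

```python
import numpy as np

def quantize_int8(x, scale):
    """Map float values to int8 using a symmetric per-tensor scale (one simple scheme among many)."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

# Hypothetical weights and activations for one fully connected layer.
rng = np.random.default_rng(0)
weights = rng.standard_normal((64, 128)).astype(np.float32)
activations = rng.standard_normal(128).astype(np.float32)

# Float32 reference result, as a floating-point (GPU-style) pipeline would compute it.
ref = weights @ activations

# Choose per-tensor scales from the observed value ranges.
w_scale = np.abs(weights).max() / 127.0
a_scale = np.abs(activations).max() / 127.0

w_q = quantize_int8(weights, w_scale)
a_q = quantize_int8(activations, a_scale)

# Integer multiply-accumulate: int8 x int8 products summed in int32,
# the kind of operation FPGA DSP blocks and LUT fabric perform cheaply.
acc = w_q.astype(np.int32) @ a_q.astype(np.int32)

# Rescale the integer accumulator back to the float domain.
approx = acc * (w_scale * a_scale)

print("max abs error vs float32 reference:", np.max(np.abs(ref - approx)))
```

Run as a script, this prints a small approximation error, illustrating that inference accuracy can survive the move from floating-point to integer arithmetic while the underlying hardware does far cheaper work per operation.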