The Four Characteristics of an Optimal Inferencing Engine
By Geoff Tate, Flex Logix
EETimes - January 29, 2019
Advice on how to compare inferencing alternatives and the characteristics of an optimal inferencing engine.
In the last six months, we’ve seen an influx of specialized processors to handle neural inferencing in AI applications at the edge and in the data center. Customers have been racing to evaluate these neural inferencing options, only to find out that it’s extremely confusing and no one really knows how to measure them. Some vendors talk about TOPS and TOPS/Watt without specifying models, batch sizes, or process/voltage/temperature conditions. Others use the ResNet-50 benchmark, which is a much simpler model than most people need, so its value in evaluating inference options is questionable.
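As a rough illustration of why headline TOPS alone is misleading (this sketch is not from the article, and all numbers in it are hypothetical placeholders), delivered throughput depends on the model's compute per inference and on how much of the peak the hardware actually sustains at a given batch size and operating point:

```python
# Illustrative sketch: two hypothetical accelerators with the same peak TOPS
# can deliver very different real throughput, because sustained utilization
# varies with model, batch size, and process/voltage/temperature conditions.
# Utilization figures below are invented for illustration, not measurements.

def effective_inferences_per_sec(peak_tops: float,
                                 utilization: float,
                                 ops_per_inference: float) -> float:
    """Throughput actually delivered at a given sustained utilization of peak."""
    sustained_ops_per_sec = peak_tops * 1e12 * utilization
    return sustained_ops_per_sec / ops_per_inference

# ResNet-50 needs roughly 7.7 GOPs (~3.85 GMACs) per 224x224 inference.
OPS_RESNET50 = 7.7e9

# Same 100 TOPS headline; assumed 60% vs. 15% sustained utilization at batch 1.
print(effective_inferences_per_sec(100, 0.60, OPS_RESNET50))  # ~7,800 inferences/s
print(effective_inferences_per_sec(100, 0.15, OPS_RESNET50))  # ~1,950 inferences/s
```

A 4x gap in delivered throughput between parts with identical TOPS ratings is exactly the kind of difference that vendor spec sheets without model and batch-size context fail to reveal.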
As a result, as we head into 2019, most companies don’t know how to compare inferencing alternatives. Many don’t even know what the characteristics of an optimal inferencing engine are. This article will address both those points.