In-Memory Computing Versus Data Center Networks
Ron Wilson, Intel FPGA
June 20, 2017
For data-center architects it seems like a no-brainer. For a wide variety of applications, from the databases behind e-commerce platforms to the big-data tools in search engines to suddenly-fashionable data analytics to scientific codes, the dominant limitation on application response time is storage latency. But DRAM keeps getting denser, and solid-state drives (SSDs) cheaper. And a new class of memory devices—storage-class memory (SCM)—promises to put enormous amounts of memory on server cards. So why not just make all the data for these problem applications memory-resident, and eliminate disk and even SSD latency altogether?
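To see why the idea is tempting, consider rough order-of-magnitude access latencies. The figures below are illustrative assumptions for a hypothetical pointer-chasing workload, not measurements from this article: roughly 100 ns for a local DRAM access, 100 µs for an NVMe flash read, and 10 ms for a disk seek. A minimal back-of-envelope sketch:

```python
# Back-of-envelope sketch: how storage latency dominates response time.
# All latency figures are rough, illustrative assumptions (order of
# magnitude only), not measurements from the article.

LATENCY_NS = {
    "DRAM":     100,         # ~100 ns for a local memory access
    "NVMe SSD": 100_000,     # ~100 us for a flash read
    "Disk":     10_000_000,  # ~10 ms for a seek + read
}

RANDOM_READS_PER_QUERY = 1_000  # hypothetical pointer-chasing query

for medium, ns in LATENCY_NS.items():
    total_ms = RANDOM_READS_PER_QUERY * ns / 1e6
    print(f"{medium:>9}: {total_ms:>10.3f} ms per query")

# Output:
#      DRAM:      0.100 ms per query
#  NVMe SSD:    100.000 ms per query
#      Disk:  10000.000 ms per query
```

Under these assumptions, only the memory-resident case stays comfortably inside an interactive response budget of a few hundred milliseconds, which is why eliminating SSD and disk accesses from the critical path is so attractive.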
The notion fits well into the shifting needs of data-center workloads. Many are becoming more sensitive to user-level response time, as users show increasing willingness to abandon a search, an online purchase, or a content view after only a few seconds' delay. And the emergence of real-time constraints, as machine-learning or data-analytic functions are included in control systems—notably for autonomous vehicles—adds extra urgency to latency questions.