Pie: Pooling CPU Memory for LLM Inference
By Yi Xu, Ziming Mao, Xiangxi Mo, Shu Liu, Ion Stoica (UC Berkeley)
The rapid growth of LLMs has revolutionized natural language processing and AI analysis, but their increasing size and memory demands present significant challenges. A common solution is to spill over to CPU memory; however, traditional GPU-CPU memory swapping often results in higher latency and lower throughput.
This paper introduces Pie, an LLM inference framework that addresses these challenges with performance-transparent swapping and adaptive expansion. By leveraging predictable memory access patterns and the high bandwidth of modern hardware like the NVIDIA GH200 Grace Hopper Superchip, Pie enables concurrent data swapping without affecting foreground computation, expanding effective memory without added latency. Adaptive expansion dynamically adjusts CPU memory allocation based on real-time information, optimizing memory usage and performance under varying conditions.
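To make the idea of performance-transparent swapping concrete, the sketch below (not Pie's actual code; tensor shapes, names, and the use of PyTorch are illustrative assumptions) shows how a cache block could be copied from pinned CPU memory on a dedicated CUDA stream while foreground computation proceeds on the default stream, so the swap overlaps with compute rather than stalling it.

```python
# Minimal sketch, assuming PyTorch and a CUDA GPU: overlap a swap-in from
# pinned CPU memory with foreground GPU work using a side copy stream.
# Shapes and names are illustrative, not Pie's implementation.
import torch

assert torch.cuda.is_available()
device = torch.device("cuda")

copy_stream = torch.cuda.Stream()                        # background stream for swaps
cpu_block = torch.empty(4096, 4096, pin_memory=True)     # pinned CPU memory block
gpu_block = torch.empty(4096, 4096, device=device)       # destination GPU block

x = torch.randn(4096, 4096, device=device)
w = torch.randn(4096, 4096, device=device)

# Launch the swap-in on the side stream so it can overlap with the matmul below.
with torch.cuda.stream(copy_stream):
    gpu_block.copy_(cpu_block, non_blocking=True)

# Foreground computation runs on the default stream; with a high-bandwidth
# CPU-GPU interconnect (e.g., NVLink-C2C on GH200) the copy need not slow it down.
y = x @ w

# Before the swapped-in block is consumed, the default stream waits for the copy.
torch.cuda.current_stream().wait_stream(copy_stream)
```

In this picture, adaptive expansion would correspond to adjusting how many such CPU-resident blocks are in use at runtime, growing the pool while the overlap stays interference-free and shrinking it otherwise.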
Pie maintains low computation latency, high throughput, and high elasticity. Our experimental evaluation demonstrates that Pie achieves an optimal swapping policy during cache warmup and effectively balances increased memory capacity with negligible impact on computation. With its extended capacity, Pie achieves up to 1.9X higher throughput and 2X lower latency than vLLM. Additionally, Pie can reduce GPU memory usage by up to 1.67X while maintaining the same performance. Compared to FlexGen, an offline profiling-based swapping solution, Pie achieves orders of magnitude lower latency and 9.4X higher throughput.