Combating the Memory Walls: Optimization Pathways for Long-Context Agentic LLM Inference
By Haoran Wu 1, Can Xiao 2, Jiayi Nie 1, Xuan Guo 1, Binglei Lou 2, Jeffrey T. H. Wong 2, Zhiwen Mo 2, Cheng Zhang 2, Przemyslaw Forys 2, Wayne Luk 2, Hongxiang Fan 2, Jianyi Cheng 3, Timothy M. Jones 1, Rika Antonova 1, Robert Mullins 1, Aaron Zhao 2
1 University of Cambridge
2 Imperial College London
3 University of Edinburgh
Abstract
LLMs now form the backbone of AI agents across a diverse array of applications, including tool use, command-line agents, and web and computer-use agents. These agentic inference workloads differ fundamentally from chatbot-focused inference: they often require much larger context lengths to capture complex, prolonged inputs, such as entire webpage DOMs or complicated tool-call trajectories. This, in turn, generates significant off-chip memory traffic for the underlying hardware at inference time, leaving the workload constrained by two memory walls, namely the bandwidth wall and the capacity wall, and preventing the on-chip compute units from achieving high utilization.
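To see why capacity becomes a wall at agentic context lengths, consider a back-of-the-envelope KV-cache estimate. The numbers below are illustrative assumptions (a Llama-2-70B-like decoder shape with grouped-query attention), not figures from the paper:

```python
# Illustrative KV-cache sizing; model shape is an assumption, not from the paper.
num_layers = 80          # decoder layers
num_kv_heads = 8         # KV heads (grouped-query attention)
head_dim = 128           # dimension per attention head
bytes_per_elem = 2       # FP16 storage

context_len = 128_000    # long agentic context, in tokens
batch_size = 8           # concurrent requests being served

# Each cached token stores one K and one V vector per layer.
kv_bytes_per_token = 2 * num_layers * num_kv_heads * head_dim * bytes_per_elem
total_gib = batch_size * context_len * kv_bytes_per_token / 2**30
print(f"KV cache: {total_gib:.1f} GiB")  # ~312.5 GiB, far beyond one device's HBM
```

Even under these modest assumptions, the KV cache alone exceeds the HBM capacity of any single accelerator, and every decoded token must stream a large fraction of it from off-chip memory, which is the bandwidth wall.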
In this paper, we introduce PLENA, a hardware-software co-designed system that applies three core optimization pathways to tackle these challenges. PLENA includes an efficient hardware implementation of compute and memory units supporting an asymmetric quantization scheme. It also features a novel flattened systolic-array architecture with native support for FlashAttention, targeting the memory walls encountered when serving long-context LLM inference. Additionally, PLENA is developed with a complete stack, including a custom ISA, a compiler, a cycle-emulated simulator, and an automated design-space-exploration flow. Simulation results show that PLENA achieves up to 8.5x higher utilization than existing accelerators, and delivers 2.24x higher throughput than the A100 GPU and 3.85x higher throughput than the TPU v6e under the same multiplier count and memory settings. The full PLENA system will also be open-sourced.
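The abstract does not detail PLENA's quantization format, so the following is only a minimal sketch of what a generic asymmetric (zero-point) uniform quantization scheme looks like; the function names and the 4-bit setting are assumptions for illustration, not PLENA's actual hardware scheme:

```python
import numpy as np

def asymmetric_quantize(x: np.ndarray, num_bits: int = 4):
    """Generic asymmetric (zero-point) uniform quantization sketch.

    Unlike symmetric quantization, the integer grid is shifted by a
    zero point so the full [x_min, x_max] range is usable.
    """
    qmin, qmax = 0, 2**num_bits - 1
    x_min, x_max = float(x.min()), float(x.max())
    scale = max((x_max - x_min) / (qmax - qmin), 1e-8)  # guard against zero range
    zero_point = round(qmin - x_min / scale)
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    return (q.astype(np.float32) - zero_point) * scale

x = np.random.randn(16).astype(np.float32)
q, s, z = asymmetric_quantize(x, num_bits=4)
print(np.abs(dequantize(q, s, z) - x).max())  # error is bounded by ~scale/2
```

The hardware-relevant point is that asymmetric formats shrink KV-cache and weight traffic across both memory walls, at the cost of compute units that must handle the zero-point correction, which is what PLENA's compute and memory units are designed to support natively.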