[2603.29002] Understand and Accelerate Memory Processing Pipeline for Disaggregated LLM Inference
Computer Science > Distributed, Parallel, and Cluster Computing arXiv:2603.29002 (cs) [Submitted on 30 Mar 2026] Title: Understand and Accelerate Memory Processing Pipeline for Disaggregated LLM Inference Authors: Zifan He, Rui Ma, Yizhou Sun, Jason Cong Abstract: Modern large language models (LLMs) increasingly depend on efficient long-context processing and generation mechanisms, including sparse attention, retrieval-augmented generation (RAG), and compressed contextual memory, to support complex reasoning. We show that these optimizations can be unified into a four-step memory processing pipeline: Prepare Memory, Compute Relevancy, Retrieval, and Apply to Inference. Through systematic profiling, we identify a 22%-97% memory processing overhead in LLM inference and strong heterogeneity in its computational characteristics. Motivated by this insight, we argue that \textbf{heterogeneous systems} are well-suited to accelerate memory processing and thus end-to-end inference. We demonstrate this approach on a GPU-FPGA system by offloading sparse, irregular, and memory-bound operations to FPGAs while retaining compute-intensive operations on GPUs. Evaluated on an AMD MI210 GPU and an Alveo U55C FPGA, our system is $1.04\sim2.2\times$ faster and requires $1.11\sim4.7\times$ less energy across multiple L...
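To make the four pipeline steps named in the abstract concrete, here is a minimal, hedged sketch in plain Python. All function names, the block-based memory layout, and the cosine-similarity relevancy score are illustrative assumptions for exposition, not the paper's actual system or API; the paper offloads such sparse, memory-bound steps to an FPGA, which this sketch does not model.

```python
# Illustrative sketch of the four-step memory processing pipeline
# (Prepare Memory -> Compute Relevancy -> Retrieval -> Apply to Inference).
# All names and representations here are hypothetical, not the paper's.
from math import sqrt

def prepare_memory(tokens, block_size=4):
    """Step 1: Prepare Memory -- split the long context into fixed-size blocks."""
    return [tokens[i:i + block_size] for i in range(0, len(tokens), block_size)]

def compute_relevancy(query_vec, block_vecs):
    """Step 2: Compute Relevancy -- score each block against the query
    (cosine similarity is one simple stand-in for a relevancy metric)."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sqrt(sum(x * x for x in a)) or 1.0
        nb = sqrt(sum(x * x for x in b)) or 1.0
        return dot / (na * nb)
    return [cos(query_vec, v) for v in block_vecs]

def retrieve(blocks, scores, k=2):
    """Step 3: Retrieval -- keep the top-k most relevant blocks,
    restored to their original context order."""
    top = sorted(range(len(blocks)), key=lambda i: scores[i], reverse=True)[:k]
    return [blocks[i] for i in sorted(top)]

def apply_to_inference(retrieved_blocks):
    """Step 4: Apply to Inference -- splice the retrieved blocks back
    into a reduced context for the model to attend over."""
    return [tok for block in retrieved_blocks for tok in block]
```

As a usage sketch: chunk an 8-token context into two blocks, score them against a query embedding, keep the single best block, and feed it forward. Steps 2 and 3 are the sparse, irregular, memory-bound operations that the paper argues are a poor fit for GPUs and a natural target for FPGA offload.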