[2512.10411] SWAA: Sliding Window Attention Adaptation for Efficient and Quality Preserving Long Context Processing
Computer Science > Computation and Language

arXiv:2512.10411 (cs)

[Submitted on 11 Dec 2025 (v1), last revised 26 Mar 2026 (this version, v5)]

Title: SWAA: Sliding Window Attention Adaptation for Efficient and Quality Preserving Long Context Processing

Authors: Yijiong Yu, Jiale Liu, Qingyun Wu, Huazheng Wang, Ji Pei

Abstract: The quadratic complexity of self-attention in Transformer-based LLMs renders long-context inference prohibitively expensive. While Sliding Window Attention (SWA), the simplest sparse attention pattern, offers a linear-complexity alternative, it suffers from catastrophic long-context performance collapse. This collapse stems from two fundamental factors: the training-inference mismatch that arises when SWA is naively applied to models pretrained with Full Attention (FA), and the inherent structural inability to access distant information when SWA is applied to every module at all times. To address these dual challenges, we propose Sliding Window Attention Adaptation (SWAA), a plug-and-play toolkit of recipes that adapts FA models to SWA without costly pretraining. SWAA systematically combines four core strategies to tackle these distinct issues: (1) FA decode and (2) interleaving FA and SWA layers, which mitigate structural defects by selectively allowing access to d...
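To make the contrast between full attention and sliding window attention concrete, the following is a minimal NumPy sketch (not the paper's implementation) of a causal attention step where each query may attend only to the most recent `window` keys; the function names and the use of a boolean mask are illustrative assumptions:

```python
import numpy as np

def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    """Boolean mask: position i may attend to position j iff j <= i
    and i - j < window (causal, last `window` tokens only)."""
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (i - j < window)

def masked_attention(q: np.ndarray, k: np.ndarray, v: np.ndarray,
                     mask: np.ndarray) -> np.ndarray:
    """Scaled dot-product attention with disallowed positions set to -inf
    before the softmax, so they receive zero weight."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    scores = np.where(mask, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v
```

An FA layer corresponds to `window >= seq_len` (the mask degenerates to the full causal triangle), so interleaving FA and SWA layers, as the abstract describes, amounts to choosing a different `window` per layer; the quadratic score matrix here is only for clarity, whereas an efficient SWA kernel would compute just the banded entries.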