[2512.14391] RePo: Language Models with Context Re-Positioning
Computer Science > Machine Learning

arXiv:2512.14391 (cs)

[Submitted on 16 Dec 2025 (v1), last revised 5 Mar 2026 (this version, v2)]

Title: RePo: Language Models with Context Re-Positioning

Authors: Huayang Li, Tianyu Zhao, Deng Cai, Richard Sproat

Abstract: In-context learning is fundamental to modern Large Language Models (LLMs); however, prevailing architectures impose a rigid, fixed contextual structure by assigning linear or constant positional indices. Drawing on Cognitive Load Theory (CLT), we argue that this uninformative structure increases extraneous cognitive load, consuming finite working-memory capacity that should instead be allocated to deep reasoning and attention allocation. To address this, we propose RePo, a novel mechanism that reduces extraneous load via context re-positioning. Unlike standard approaches, RePo uses a differentiable module, $f_\phi$, to assign token positions that capture contextual dependencies rather than relying on a pre-defined order. By continually pre-training the OLMo-2 1B and 7B models, we demonstrate that RePo consistently improves performance on tasks involving noisy contexts, structured data, and longer context lengths, while maintaining competitive performance on general short-context tasks. Detailed analysis reveals that RePo successfully allocates higher attention to distant but relevant ...
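The abstract describes $f_\phi$ only at a high level. As a rough illustration of what "context re-positioning" could look like, the sketch below (PyTorch) predicts a continuous position for each token from its hidden state and feeds those learned positions into rotary-style angle computation in place of the default 0..n-1 indices. All names (PositionPredictor, rope_angles) and design choices (sigmoid squashing, RoPE-style frequencies) are assumptions made for illustration, not the paper's actual implementation.

# Illustrative sketch only: a small differentiable module assigns each token
# a learned position, replacing the fixed linear indices. Names and design
# choices here are hypothetical, not taken from the paper.
import torch
import torch.nn as nn


class PositionPredictor(nn.Module):
    """Maps per-token hidden states to learned positional indices (an f_phi stand-in)."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, hidden: torch.Tensor, seq_len: int) -> torch.Tensor:
        # hidden: (batch, seq_len, hidden_dim) -> positions: (batch, seq_len)
        # Squash to [0, seq_len) so predicted positions stay in the range
        # the rotary frequencies were designed for.
        return torch.sigmoid(self.proj(hidden)).squeeze(-1) * seq_len


def rope_angles(positions: torch.Tensor, head_dim: int, base: float = 10000.0) -> torch.Tensor:
    """Rotary angles computed from arbitrary (possibly non-integer) positions."""
    # positions: (batch, seq_len); freqs: (head_dim // 2,)
    freqs = base ** (-torch.arange(0, head_dim, 2, dtype=torch.float32) / head_dim)
    return positions.unsqueeze(-1) * freqs  # (batch, seq_len, head_dim // 2)


if __name__ == "__main__":
    batch, seq_len, hidden_dim, head_dim = 2, 16, 64, 32
    hidden = torch.randn(batch, seq_len, hidden_dim)
    f_phi = PositionPredictor(hidden_dim)
    positions = f_phi(hidden, seq_len)          # learned, context-dependent positions
    angles = rope_angles(positions, head_dim)   # used in place of arange-based indices
    print(positions.shape, angles.shape)        # torch.Size([2, 16]) torch.Size([2, 16, 16])

Because the predicted positions are continuous and produced by a differentiable projection, gradients flow through the position assignment during continual pre-training; how the paper constrains or initializes these positions is not specified in the abstract.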