[2603.23571] StateLinFormer: Stateful Training Enhancing Long-term Memory in Navigation
Computer Science > Machine Learning
arXiv:2603.23571 (cs)
[Submitted on 24 Mar 2026]

Title: StateLinFormer: Stateful Training Enhancing Long-term Memory in Navigation
Authors: Zhiyuan Chen, Yuxuan Zhong, Fan Wang, Bo Yu, Pengtao Shao, Shaoshan Liu, Ning Ding

Abstract: Effective navigation intelligence relies on long-term memory to support both immediate generalization and sustained adaptation. However, existing approaches face a dilemma: modular systems rely on explicit mapping but lack flexibility, while Transformer-based end-to-end models are constrained by fixed context windows, limiting persistent memory across extended interactions. We introduce StateLinFormer, a linear-attention navigation model trained with a stateful memory mechanism that preserves recurrent memory states across consecutive training segments instead of reinitializing them at each batch boundary. This training paradigm effectively approximates learning on infinitely long sequences, enabling the model to achieve long-horizon memory retention. Experiments across both MAZE and ProcTHOR environments demonstrate that StateLinFormer significantly outperforms its stateless linear-attention counterpart and standard Transformer baselines with fixed context windows. Notably, as interaction length increases, persistent stateful training substan...
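To make the core idea concrete, below is a minimal sketch of what "stateful training" over a linear-attention layer can look like. It is not the authors' released code: the layer uses a standard linear-attention recurrence (a running sum of key-value outer products with an ELU+1 feature map, one common choice), and all names (`phi`, `d_model`, `train_stateful`, the `(S, z)` state layout) are illustrative assumptions. The key detail from the abstract is in the training loop: the recurrent state is detached and carried into the next segment of the same trajectory, rather than being reset to zero at each batch boundary.

```python
# Sketch of stateful training for a linear-attention layer (assumptions noted above).
import torch
import torch.nn as nn


def phi(x):
    # Positive feature map for linear attention (ELU + 1, a common choice).
    return torch.nn.functional.elu(x) + 1.0


class LinearAttention(nn.Module):
    def __init__(self, d_model):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)

    def forward(self, x, state=None):
        # x: (batch, seq_len, d_model)
        B, T, D = x.shape
        q, k, v = phi(self.q(x)), phi(self.k(x)), self.v(x)
        if state is None:
            S = x.new_zeros(B, D, D)   # running sum of k_t v_t^T
            z = x.new_zeros(B, D)      # running sum of k_t (normalizer)
        else:
            S, z = state
        outs = []
        for t in range(T):  # recurrent form; practical kernels use chunked scans
            S = S + k[:, t].unsqueeze(-1) * v[:, t].unsqueeze(-2)
            z = z + k[:, t]
            num = torch.einsum("bd,bde->be", q[:, t], S)
            den = torch.einsum("bd,bd->b", q[:, t], z).clamp_min(1e-6)
            outs.append(num / den.unsqueeze(-1))
        return torch.stack(outs, dim=1), (S, z)


def train_stateful(model, segments, loss_fn, opt):
    # Stateful training: the memory state persists across consecutive
    # segments of one long trajectory; only the gradient graph is cut
    # (as in truncated BPTT). A stateless baseline would instead pass
    # state=None for every segment, resetting memory at each boundary.
    state = None
    for x, target in segments:  # segments of the SAME trajectory, in order
        out, state = model(x, state)
        loss = loss_fn(out, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        state = tuple(s.detach() for s in state)  # carry values, drop graph
```

Note that this carry-over is cheap precisely because linear attention summarizes history in a fixed-size state `(S, z)`: memory cost stays constant no matter how long the trajectory grows, whereas a fixed-context Transformer would need an ever-growing key-value cache to retain the same information.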