[2602.21454] When Learning Hurts: Fixed-Pole RNN for Real-Time Online Training
Summary
This paper examines the limitations of learning recurrent poles in RNNs for real-time online training and advocates fixed-pole architectures, which achieve stable, effective training with less data.
Why It Matters
As machine learning applications increasingly rely on real-time data, understanding the efficiency of training methods is crucial. This research highlights how fixed-pole RNNs can provide stable and effective solutions in data-constrained environments, potentially influencing future designs in neural network architectures.
Key Takeaways
- Learning recurrent poles in RNNs can complicate optimization and require more data.
- Fixed-pole architectures offer stable state representations with less training complexity.
- Empirical results show fixed-pole networks outperform traditional RNNs in real-time tasks.
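To make the "poles" in these takeaways concrete, the standard discrete-time state-space view of an RNN can be sketched as follows (the symbols here are illustrative notation, not taken from the paper):

```latex
x_{t} = A\,x_{t-1} + B\,u_{t}, \qquad y_{t} = C\,x_{t}
```

Here $u_t$ is the input, $x_t$ the hidden state, and $y_t$ the output. The poles of the corresponding IIR filter are the eigenvalues of the recurrent matrix $A$; a fixed-pole architecture freezes $A$ (and hence the poles) and trains only the readout $C$, which turns the optimization into a convex linear-regression problem.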
Computer Science > Machine Learning — arXiv:2602.21454 (cs)
Submitted on 25 Feb 2026
Title: When Learning Hurts: Fixed-Pole RNN for Real-Time Online Training
Authors: Alexander Morgan, Ummay Sumaya Khan, Lingjia Liu, Lizhong Zheng
Abstract: Recurrent neural networks (RNNs) can be interpreted as discrete-time state-space models, where the state evolution corresponds to an infinite-impulse-response (IIR) filtering operation governed by both feedforward weights and recurrent poles. While, in principle, all parameters including pole locations can be optimized via backpropagation through time (BPTT), such joint learning incurs substantial computational overhead and is often impractical for applications with limited training data. Echo state networks (ESNs) mitigate this limitation by fixing the recurrent dynamics and training only a linear readout, enabling efficient and stable online adaptation. In this work, we analytically and empirically examine why learning recurrent poles does not provide tangible benefits in data-constrained, real-time learning scenarios. Our analysis shows that pole learning renders the weight optimization problem highly non-convex, requiring significantly more training samples and iterations for gradient-based methods to converge to meaningful solutions. Empirically, we observe that for comp...
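The echo-state-network approach described in the abstract — fixed recurrent dynamics, trained linear readout — can be illustrated with a minimal sketch. This is not the paper's implementation; the reservoir size, spectral radius, toy task, and ridge penalty below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration (not from the paper).
n_in, n_res = 1, 100
T = 500

# Fixed recurrent dynamics: a random reservoir rescaled so that its
# spectral radius (largest pole magnitude) is below 1, which keeps
# the state stable. These weights are never trained.
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
W_in = rng.standard_normal((n_res, n_in))

# Toy task: predict the next sample of a sine wave.
u = np.sin(0.1 * np.arange(T + 1))[:, None]
inputs, targets = u[:-1], u[1:]

# Run the fixed reservoir once to collect hidden states.
states = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W @ x + W_in @ inputs[t])
    states[t] = x

# Train only the linear readout, here via ridge regression in closed
# form -- no backpropagation through time, no pole learning.
lam = 1e-6
W_out = np.linalg.solve(states.T @ states + lam * np.eye(n_res),
                        states.T @ targets)

pred = states @ W_out
mse = float(np.mean((pred - targets) ** 2))
print(f"readout MSE: {mse:.6f}")
```

Because the recurrent weights stay fixed, the only trained parameters enter linearly, so the readout fit is a convex problem solvable in one shot — the efficiency argument the abstract makes for data-constrained, real-time settings.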