[2602.18401] Leakage and Second-Order Dynamics Improve Hippocampal RNN Replay
Summary
This paper explores how hidden state leakage and second-order (momentum) dynamics can enhance replay in hippocampal recurrent neural network (RNN) models, proposing mechanisms that make sampled replay trajectories both more exploratory and faster.
Why It Matters
Understanding replay dynamics in neural networks is crucial for advancing AI models that mimic biological processes. By framing RNN replay as a sampling process, this research offers concrete ways to improve RNN behavior, with implications for machine learning and cognitive modeling.
Key Takeaways
- The study shows that the time-varying gradients replay activity should follow are difficult to estimate, but motivate the use of hidden state leakage in RNNs.
- Hidden state adaptation (negative feedback) encourages exploration during replay, but incurs non-Markov sampling that also slows replay.
- Hidden state momentum yields the first model of temporally compressed replay in noisy path-integrating RNNs, connects replay to underdamped Langevin sampling, and, combined with adaptation, counters slowness while maintaining exploration.
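The three mechanisms above (leakage, adaptation, and momentum) can be sketched as a single noisy hidden-state update. This is a hypothetical illustration, not the paper's actual model: all variable names, the `tanh` nonlinearity, and every parameter value are assumptions chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 64        # hidden units (illustrative size)
dt = 0.1      # integration step
lam = 0.2     # leakage rate: decays the hidden state toward zero
beta = 0.5    # adaptation strength (negative feedback)
tau_a = 5.0   # adaptation time constant
mu = 0.8      # momentum coefficient (second-order dynamics)
sigma = 0.05  # noise scale

W = rng.normal(0.0, 1.0 / np.sqrt(n), (n, n))  # recurrent weights
h = rng.normal(0.0, 0.1, n)  # hidden state
a = np.zeros(n)              # adaptation variable (tracks h, feeds back negatively)
v = np.zeros(n)              # hidden-state "velocity" carrying momentum

for _ in range(1000):
    drive = np.tanh(W @ h) - beta * a          # recurrent drive minus adaptation
    noise = sigma * rng.normal(size=n)
    v = mu * v + dt * (-lam * h + drive) + noise  # momentum update with leakage
    h = h + dt * v                             # second-order hidden-state step
    a = a + dt * (h - a) / tau_a               # adaptation slowly tracks h
```

Setting `mu = 0` recovers a first-order (overdamped) update, while `beta = 0` removes the negative feedback, which makes the roles of the three mechanisms easy to compare in isolation.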
Computer Science > Machine Learning
arXiv:2602.18401 (cs)
[Submitted on 20 Feb 2026]
Title: Leakage and Second-Order Dynamics Improve Hippocampal RNN Replay
Authors: Josue Casco-Rodriguez, Nanda H. Krishna, Richard G. Baraniuk
Abstract: Biological neural networks (like the hippocampus) can internally generate "replay" resembling stimulus-driven activity. Recent computational models of replay use noisy recurrent neural networks (RNNs) trained to path-integrate. Replay in these networks has been described as Langevin sampling, but new modifiers of noisy RNN replay have surpassed this description. We re-examine noisy RNN replay as sampling to understand or improve it in three ways: (1) Under simple assumptions, we prove that the gradients replay activity should follow are time-varying and difficult to estimate, but readily motivate the use of hidden state leakage in RNNs for replay. (2) We confirm that hidden state adaptation (negative feedback) encourages exploration in replay, but show that it incurs non-Markov sampling that also slows replay. (3) We propose the first model of temporally compressed replay in noisy path-integrating RNNs through hidden state momentum, connect it to underdamped Langevin sampling, and show that, together with adaptation, it counters slowness while maintaining exploration. We verify our findings via ...
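The underdamped Langevin sampling that the abstract connects to hidden state momentum can be illustrated with a minimal sampler targeting a standard Gaussian. This is a generic textbook-style sketch, not the paper's method: the target distribution, the Euler-Maruyama discretization, and all parameter values are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

def grad_log_p(x):
    # Target density: standard Gaussian, so grad log p(x) = -x.
    return -x

gamma = 1.0  # friction coefficient
dt = 0.05    # step size
x = np.array([3.0])  # position, deliberately started far from the mode
v = np.zeros(1)      # velocity: the "momentum" degree of freedom

samples = []
for _ in range(20000):
    # Underdamped Langevin dynamics, Euler-Maruyama discretization:
    # dv = (grad log p(x) - gamma * v) dt + sqrt(2 * gamma) dW
    noise = np.sqrt(2.0 * gamma * dt) * rng.normal(size=1)
    v = v + dt * (grad_log_p(x) - gamma * v) + noise
    x = x + dt * v
    samples.append(x[0])

samples = np.array(samples[5000:])  # discard burn-in
```

Because noise enters through the velocity rather than the position directly, successive positions are smooth and persistent, which is the mechanism by which momentum can speed traversal of the state space relative to overdamped (first-order) Langevin sampling.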