[2602.19655] Representation Stability in a Minimal Continual Learning Agent
Summary
This article summarizes a study of representation stability in a minimal continual learning agent: a simple stateful system that maintains and adapts internal representations over time without relying on complex architectures.
Why It Matters
Understanding representation stability is crucial for developing efficient continual learning systems, especially in environments where retraining is impractical. This research provides a foundational empirical baseline for future studies in this area, emphasizing the balance between stability and plasticity in learning agents.
Key Takeaways
- The study introduces a minimal continual learning agent that isolates representational dynamics.
- It quantifies representational change using cosine similarity and defines a stability metric.
- Experiments show a transition from plasticity to stability in response to consistent input.
- Semantic perturbations lead to temporary decreases in similarity, followed by recovery.
- The findings establish a baseline for studying representational accumulation in continual learning.
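The cosine-similarity measurement in the takeaways above can be sketched in a few lines. The paper defines a stability metric over time intervals but does not spell out the formula here, so the windowed mean over the last few transitions below is an illustrative assumption:

```python
import numpy as np

def cos_sim(a, b):
    """Cosine similarity between two state vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def stability(states, window=3):
    """Mean cosine similarity between successive state vectors over the
    last `window` transitions; values near 1 indicate a stable regime,
    lower values a plastic (rapidly changing) regime.

    The windowed-mean definition is an assumption for illustration,
    not the paper's exact metric.
    """
    sims = [cos_sim(s, t) for s, t in zip(states, states[1:])]
    return float(np.mean(sims[-window:]))
```

Under this definition, a sequence of identical states yields a stability of 1.0, while orthogonal successive states yield 0.0, matching the plastic-to-stable transition the experiments report.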
Computer Science > Machine Learning
arXiv:2602.19655 (cs)
[Submitted on 23 Feb 2026]
Title: Representation Stability in a Minimal Continual Learning Agent
Authors: Vishnu Subramanian
Abstract: Continual learning systems are increasingly deployed in environments where retraining or reset is infeasible, yet many approaches emphasize task performance rather than the evolution of internal representations over time. In this work, we study a minimal continual learning agent designed to isolate representational dynamics from architectural complexity and optimization objectives. The agent maintains a persistent state vector across executions and incrementally updates it as new textual data is introduced. We quantify representational change using cosine similarity between successive normalized state vectors and define a stability metric over time intervals. Longitudinal experiments across eight executions reveal a transition from an initial plastic regime to a stable representational regime under consistent input. A deliberately introduced semantic perturbation produces a bounded decrease in similarity, followed by recovery and restabilization under subsequent coherent input. These results demonstrate that meaningful stability-plasticity tradeoffs can emerge in a minimal, stateful learning system without explicit regularization, replay, or a...
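The abstract describes a persistent state vector updated incrementally from new textual input. The paper does not specify the update rule, so the sketch below uses an exponential-moving-average update and a hash-seeded toy text embedding, both purely illustrative assumptions, to show how the reported dynamics (convergence under consistent input, bounded dip under a semantic perturbation) can arise in such a minimal system:

```python
import hashlib
import numpy as np

def embed(text, dim=64):
    """Toy deterministic text embedding: seed an RNG from a SHA-256
    digest of the text and draw a unit vector. Illustrative only,
    not the paper's representation."""
    seed = int(hashlib.sha256(text.encode()).hexdigest(), 16) % (2**32)
    rng = np.random.default_rng(seed)
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

class MinimalAgent:
    """Minimal stateful agent: a persistent normalized state vector,
    updated incrementally as each new text arrives. The EMA update
    with rate `lr` is an assumed rule, not the paper's method."""

    def __init__(self, dim=64, lr=0.2):
        self.state = np.zeros(dim)
        self.state[0] = 1.0  # arbitrary fixed initial unit state
        self.lr = lr

    def step(self, text):
        """Incorporate one input; return cosine similarity between the
        previous and updated state (both unit-norm)."""
        x = embed(text, self.state.size)
        prev = self.state.copy()
        self.state = (1.0 - self.lr) * self.state + self.lr * x
        self.state /= np.linalg.norm(self.state)
        return float(prev @ self.state)
```

Feeding the same text repeatedly drives successive-state similarity toward 1 (the stable regime), while a single unrelated input produces a bounded dip, after which coherent input restabilizes the state, mirroring the perturbation-and-recovery pattern described in the abstract.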