[2604.00067] Temporal Memory for Resource-Constrained Agents: Continual Learning via Stochastic Compress-Add-Smooth
Computer Science > Machine Learning
arXiv:2604.00067 (cs)
[Submitted on 31 Mar 2026]

Title: Temporal Memory for Resource-Constrained Agents: Continual Learning via Stochastic Compress-Add-Smooth
Authors: Michael Chertkov

Abstract: An agent that operates sequentially must incorporate new experience without forgetting old experience, under a fixed memory budget. We propose a framework in which memory is not a parameter vector but a stochastic process: a Bridge Diffusion on a replay interval $[0,1]$, whose terminal marginal encodes the present and whose intermediate marginals encode the past. New experience is incorporated via a three-step \emph{Compress--Add--Smooth} (CAS) recursion. We test the framework on the class of models whose marginal probability densities are Gaussian mixtures with a fixed number of components $K$ in $d$ dimensions; temporal complexity is controlled by a fixed number $L$ of piecewise-linear protocol segments whose nodes store Gaussian-mixture states. The entire recursion costs $O(LKd^2)$ flops per day -- no backpropagation, no stored data, no neural networks -- making it viable for controller-light hardware. Forgetting in this framework arises not from parameter interference but from lossy temporal compression: the re-approximation of a finer protocol by a coarser one ...
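As a rough illustration of the memory layout the abstract describes, the following minimal sketch stores $L$ protocol nodes, each holding a $K$-component Gaussian-mixture state in $d$ dimensions, and counts the stored floats. All names, sizes, and the state layout are hypothetical assumptions for illustration, not the paper's implementation; the point is only that storage (and a once-per-covariance daily update) scales as $O(LKd^2)$, matching the quoted flop count.

```python
import numpy as np

# Illustrative sizes (assumptions, not from the paper).
L, K, d = 8, 4, 16

rng = np.random.default_rng(0)

def make_gmm_state(K, d, rng):
    """One hypothetical Gaussian-mixture node: weights, means, covariances."""
    weights = np.full(K, 1.0 / K)        # K mixture weights, summing to 1
    means = rng.standard_normal((K, d))  # K component means in d dimensions
    covs = np.stack([np.eye(d)] * K)     # K covariance matrices, each d x d
    return {"weights": weights, "means": means, "covs": covs}

# Memory is a list of L node states along the replay interval [0, 1].
memory = [make_gmm_state(K, d, rng) for _ in range(L)]

# Storage is dominated by the covariances: L * K * (1 + d + d^2) floats,
# i.e. O(L K d^2); an update touching each covariance once per day costs
# O(L K d^2) flops, consistent with the abstract's estimate.
floats_stored = sum(s["weights"].size + s["means"].size + s["covs"].size
                    for s in memory)
print(floats_stored)  # 8 * 4 * (1 + 16 + 256) = 8736
```

The sketch only exercises the storage bound; the actual Compress-Add-Smooth recursion over these states is specified in the paper.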