[2603.26671] Mitigating Forgetting in Continual Learning with Selective Gradient Projection
Computer Science > Machine Learning
arXiv:2603.26671 (cs) [Submitted on 8 Feb 2026]

Title: Mitigating Forgetting in Continual Learning with Selective Gradient Projection
Authors: Anika Singh, Aayush Dhaulakhandi, Varun Chopade, Likhith Malipati, David Martinez, Kevin Zhu

Abstract: As neural networks are increasingly deployed in dynamic environments, they face catastrophic forgetting: the tendency to overwrite previously learned knowledge when adapting to new tasks, which severely degrades performance on earlier tasks. We propose Selective Forgetting-Aware Optimization (SFAO), a dynamic method that regulates gradient directions via cosine similarity and per-layer gating, enabling controlled forgetting while balancing plasticity and stability. SFAO selectively projects, accepts, or discards updates using a tunable mechanism with an efficient Monte Carlo approximation. Experiments on standard continual learning benchmarks show that SFAO achieves competitive accuracy at markedly lower memory cost (a 90% reduction) and reduced forgetting on MNIST-based benchmarks, making it well suited to resource-constrained scenarios.

Subjects: Machine Learning (cs.LG); Optimization and Control (math.OC)
Cite as: arXiv:2603.26671 [cs.LG] (or arXiv:2603.26671v1 [cs.LG] for this version)
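The abstract does not spell out SFAO's gating rule, but the accept/project/discard idea it describes can be sketched per layer: compare the cosine similarity between the new-task gradient and a reference gradient from earlier tasks, then either accept the update, project out the conflicting component, or discard it. The function name, the thresholds, and the use of a single stored reference gradient below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def gate_update(g_new, g_ref, accept_thresh=0.0, discard_thresh=-0.9):
    """Illustrative accept/project/discard gate for one layer's gradient.

    g_new: flattened gradient for the current task.
    g_ref: flattened reference gradient from earlier tasks (assumed stored).
    Thresholds are hypothetical tuning knobs, not values from the paper.
    """
    denom = np.linalg.norm(g_new) * np.linalg.norm(g_ref) + 1e-12
    cos = float(np.dot(g_new, g_ref)) / denom
    if cos >= accept_thresh:
        # Aligned with past knowledge: accept the raw update.
        return g_new
    if cos <= discard_thresh:
        # Strongly conflicting: discard the update entirely.
        return np.zeros_like(g_new)
    # Mildly conflicting: remove the component along the reference
    # gradient so the update is orthogonal to it.
    proj = (np.dot(g_new, g_ref) / (np.dot(g_ref, g_ref) + 1e-12)) * g_ref
    return g_new - proj
```

With `g_ref = [1, 0]`, an aligned update like `[1, 1]` passes through unchanged, a mildly conflicting `[-1, 1]` is projected to `[0, 1]` (orthogonal to `g_ref`), and a directly opposing `[-1, 0]` is zeroed out.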