[2603.26190] Dual-Stage Invariant Continual Learning under Extreme Visual Sparsity
Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.26190 (cs) [Submitted on 27 Mar 2026]

Title: Dual-Stage Invariant Continual Learning under Extreme Visual Sparsity

Authors: Rangya Zhang, Jiaping Xiao, Lu Bai, Yuhang Zhang, Mir Feroskhan

Abstract: Continual learning seeks to maintain stable adaptation in non-stationary environments, yet this problem becomes particularly challenging in object detection, where most existing methods implicitly assume relatively balanced visual conditions. In extreme-sparsity regimes, such as space-based resident space object (RSO) detection, foreground signals are overwhelmingly dominated by background observations. Under such conditions, we analytically demonstrate that background-driven gradients destabilize the feature backbone during sequential domain shifts, causing progressive representation drift. This exposes a structural limitation of continual learning approaches that rely solely on output-level distillation: they fail to preserve the stability of intermediate representations. To address this, we propose a dual-stage invariant continual learning framework based on joint distillation, which enforces structural and semantic consistency on backbone representations and detection predictions, respectively, thereby suppressing error propagation…
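The joint-distillation objective described in the abstract can be sketched as a weighted sum of a backbone-level (structural) term and a prediction-level (semantic) term. The following is a minimal illustration, not the authors' implementation: all function names, the MSE/KL choice of distillation losses, and the weights `alpha`, `beta`, and `temperature` are assumptions for the sketch.

```python
import math

def feature_distillation_loss(old_feats, new_feats):
    # Structural consistency on backbone representations (stage 1):
    # mean squared error between features of the frozen previous-task
    # model and the current model. MSE is an illustrative choice.
    n = len(old_feats)
    return sum((o - c) ** 2 for o, c in zip(old_feats, new_feats)) / n

def _softmax(logits, temperature):
    # Softened class distribution; higher temperature flattens it.
    exps = [math.exp(x / temperature) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def prediction_distillation_loss(old_logits, new_logits, temperature=2.0):
    # Semantic consistency on detection predictions (stage 2):
    # KL divergence between softened old and new class distributions.
    p = _softmax(old_logits, temperature)
    q = _softmax(new_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def joint_distillation_loss(old_feats, new_feats, old_logits, new_logits,
                            alpha=1.0, beta=1.0):
    # Dual-stage objective: both terms are minimized jointly so that
    # neither the backbone nor the detection head drifts under the
    # background-dominated gradients of a new domain.
    return (alpha * feature_distillation_loss(old_feats, new_feats)
            + beta * prediction_distillation_loss(old_logits, new_logits))
```

When the current model matches the previous-task model exactly, both terms vanish, so the loss is zero; any drift in either the backbone features or the predicted class distribution makes it strictly positive.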