[2603.20036] Continual Learning as Shared-Manifold Continuation Under Compatible Shift



Computer Science > Machine Learning
arXiv:2603.20036 (cs) [Submitted on 20 Mar 2026]

Title: Continual Learning as Shared-Manifold Continuation Under Compatible Shift
Authors: Henry J. Kobs

Abstract: Continual learning methods usually preserve old behavior by regularizing parameters, matching old outputs, or replaying previous examples. These strategies can reduce forgetting, but they do not directly specify how the latent representation should evolve. We study a narrower geometric alternative for the regime where old and new data should remain on the same latent support: continual learning as continuation of a shared manifold. We instantiate this view within Support-Preserving Manifold Assimilation (SPMA) and evaluate a geometry-preserving variant, SPMA-OG, that combines sparse replay, output distillation, relational geometry preservation, local smoothing, and chart-assignment regularization on old anchors. On representative compatible-shift CIFAR10 and Tiny-ImageNet runs, SPMA-OG improves over sparse replay baselines in old-task retention and representation-preservation metrics while remaining competitive on new-task accuracy. On a controlled synthetic atlas-manifold benchmark, it achieves near-perfect anchor-geometry preservation while also improving new-task accuracy over replay. These results provide evidence that geometry...
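The abstract describes a training objective that combines sparse replay, output distillation, and relational geometry preservation on old anchor points. The paper's actual formulation is not shown here; the following is a minimal NumPy sketch of two such terms under common conventions, where `relational_geometry_loss` penalizes changes in pairwise anchor distances and `distillation_loss` matches new outputs to frozen old outputs. All function names, arguments, and weighting factors are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def pairwise_sq_distances(Z):
    # Squared Euclidean distances between all rows of Z (n x d).
    sq = np.sum(Z ** 2, axis=1)
    return sq[:, None] + sq[None, :] - 2.0 * Z @ Z.T

def relational_geometry_loss(Z_old, Z_new):
    # One plausible "relational geometry preservation" term: penalize
    # changes in pairwise distances between anchor embeddings computed
    # by the frozen old model (Z_old) and the current model (Z_new).
    D_old = pairwise_sq_distances(Z_old)
    D_new = pairwise_sq_distances(Z_new)
    return np.mean((D_old - D_new) ** 2)

def distillation_loss(p_old, p_new, eps=1e-12):
    # Standard output distillation: cross-entropy of current predictions
    # p_new against the frozen old model's soft targets p_old.
    return -np.mean(np.sum(p_old * np.log(p_new + eps), axis=1))

def combined_loss(task_loss, Z_old, Z_new, p_old, p_new,
                  lam_geom=1.0, lam_distill=1.0):
    # Hypothetical weighted sum; the paper's weighting scheme and its
    # local-smoothing / chart-assignment terms are omitted here.
    return (task_loss
            + lam_geom * relational_geometry_loss(Z_old, Z_new)
            + lam_distill * distillation_loss(p_old, p_new))
```

Note that a pairwise-distance penalty is invariant to rigid motions of the embedding: translating or rotating all anchors together incurs zero geometry loss, which is what distinguishes this kind of relational term from directly freezing the old features.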

Originally published on March 23, 2026. Curated by AI News.

