[2603.21928] The Golden Subspace: Where Efficiency Meets Generalization in Continual Test-Time Adaptation

arXiv - Machine Learning


Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.21928 (cs) · Submitted on 23 Mar 2026

Title: The Golden Subspace: Where Efficiency Meets Generalization in Continual Test-Time Adaptation

Authors: Guannan Lai, Da-Wei Zhou, Zhenguo Li, Han-Jia Ye

Abstract: Continual Test-Time Adaptation (CTTA) aims to enable models to adapt online to unlabeled data streams under distribution shift without accessing source data. Existing CTTA methods face an efficiency-generalization trade-off: updating more parameters improves adaptation but severely reduces online inference efficiency. An ideal solution is to achieve comparable adaptation with minimal feature updates; we call this minimal subspace the golden subspace. We prove its existence in a single-step adaptation setting and show that it coincides with the row space of the pretrained classifier. To enable online maintenance of this subspace, we introduce the sample-wise Average Gradient Outer Product (AGOP) as an efficient proxy for estimating the classifier weights without retraining. Building on these insights, we propose Guided Online Low-rank Directional adaptation (GOLD), which uses a lightweight adapter to project features onto the golden subspace and learns a compact scaling vector while the subspace is dynamically updated vi...
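To make the abstract's central claim concrete, here is a minimal numpy sketch (not the paper's GOLD implementation) of why the AGOP can stand in for the classifier's row space. For a linear head f(x) = Wx, the per-sample Jacobian is W itself, so the AGOP reduces exactly to WᵀW, and its top eigenvectors span the row space of W, the "golden subspace." All shapes and variable names below are illustrative assumptions; for a deep network the paper estimates this quantity online from per-sample gradients.

```python
import numpy as np

# Hypothetical setup: a 10-class head over 64-dim features.
rng = np.random.default_rng(0)
C, d, r = 10, 64, 10
W = rng.standard_normal((C, d))   # stand-in for pretrained classifier weights

# For a linear head f(x) = W x, each sample's Jacobian is W,
# so the Average Gradient Outer Product (AGOP) is just W^T W.
agop = W.T @ W                    # (d, d), symmetric PSD

# The top-r eigenvectors of the AGOP span row(W).
eigvals, eigvecs = np.linalg.eigh(agop)   # eigenvalues in ascending order
U = eigvecs[:, -r:]                       # basis of the "golden subspace"

# Adapting only within the subspace: project a feature vector onto it.
x = rng.standard_normal(d)
x_proj = U @ (U.T @ x)

# Sanity check: identical to projecting via W's right singular vectors.
_, _, Vt = np.linalg.svd(W, full_matrices=False)
x_ref = Vt.T @ (Vt @ x)
assert np.allclose(x_proj, x_ref)
```

The efficiency argument then follows: instead of updating all d feature dimensions, an adapter restricted to this r-dimensional subspace (r = number of classes here, typically r ≪ d) touches only the directions the classifier can actually distinguish.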

Originally published on March 24, 2026. Curated by AI News.

