[2602.18528] Audio-Visual Continual Test-Time Adaptation without Forgetting
Summary
The paper presents a novel method, AV-CTTA, for audio-visual continual test-time adaptation that minimizes catastrophic forgetting while improving model performance across non-stationary domains.
Why It Matters
This research addresses a significant challenge in machine learning: models that adapt online to changing data distributions tend to lose previously learned information. The proposed method improves robustness under such distribution shift, making it relevant for applications that face continually changing audio or visual conditions.
Key Takeaways
- AV-CTTA adapts audio-visual models at test-time without catastrophic forgetting.
- The method focuses on optimizing the modality fusion layer for improved performance.
- Dynamic parameter retrieval enhances adaptability to new data distributions.
- Extensive experiments demonstrate significant performance improvements over existing methods.
- The approach is applicable to both unimodal and bimodal corruptions.
Computer Science > Machine Learning, arXiv:2602.18528 (cs). Submitted on 20 Feb 2026.
Authors: Sarthak Kumar Maharana, Akshay Mehra, Bhavya Ramakrishna, Yunhui Guo, Guan-Ming Su
Abstract
Audio-visual continual test-time adaptation involves continually adapting a source audio-visual model at test time to unlabeled, non-stationary domains in which either or both modalities can be distributionally shifted, hampering online cross-modal learning and eventually leading to poor accuracy. While previous works have tackled this problem, we find that SOTA methods suffer from catastrophic forgetting: the model's performance drops well below that of the source model due to continual parameter updates at test time. In this work, we first show that adapting only the modality fusion layer to a target domain not only improves performance on that domain but can also enhance performance on subsequent domains. Based on this strong cross-task transferability of the fusion layer's parameters, we propose a method, $\texttt{AV-CTTA}$, that improves test-time performance of the models without access to any source data. Our approach works by using a selective parameter retrieval mechanism that dynamically retrieves the best fusion layer parameters from a buffer using...
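The abstract is truncated before it states the paper's actual retrieval criterion, so the following is only an illustrative sketch of the general idea of buffer-based fusion-layer retrieval: snapshots of fusion-layer weights are kept in a buffer, and at test time the snapshot whose predictions on the current batch are most confident (lowest mean entropy, a hypothetical stand-in criterion) is retrieved. All names (`FusionBuffer`, the feature shapes, the entropy score) are assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mean_entropy(probs):
    # Average per-sample prediction entropy; lower = more confident.
    return float(-(probs * np.log(probs + 1e-12)).sum(axis=-1).mean())

class FusionBuffer:
    """Buffer of fusion-layer weight snapshots.

    Retrieval scores each stored snapshot on the current unlabeled batch
    and returns the one with the lowest mean prediction entropy -- a
    hypothetical selection rule standing in for the paper's criterion.
    """
    def __init__(self):
        self.snapshots = []

    def store(self, W):
        self.snapshots.append(W.copy())

    def retrieve(self, audio_feat, video_feat):
        # Late fusion: concatenate modality features, apply linear fusion weights.
        fused = np.concatenate([audio_feat, video_feat], axis=-1)
        scored = [(mean_entropy(softmax(fused @ W)), W) for W in self.snapshots]
        return min(scored, key=lambda t: t[0])[1]

rng = np.random.default_rng(0)
buf = FusionBuffer()
for _ in range(3):  # snapshots saved after adapting to earlier domains
    buf.store(rng.normal(size=(8, 4)))  # (audio_dim + video_dim, n_classes)

# Unlabeled test batch of 16 samples, 4-dim features per modality.
a, v = rng.normal(size=(16, 4)), rng.normal(size=(16, 4))
best_W = buf.retrieve(a, v)  # fusion weights selected for this batch
```

Because only the small fusion layer is swapped, the audio and visual backbones stay frozen, which is consistent with the paper's claim that fusion-layer-only adaptation avoids the drift that causes forgetting.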