[2510.21271] Buffer layers for Test-Time Adaptation
Computer Science > Machine Learning
arXiv:2510.21271 (cs)
[Submitted on 24 Oct 2025 (v1), last revised 21 Mar 2026 (this version, v3)]

Title: Buffer layers for Test-Time Adaptation
Authors: Hyeongyu Kim, Geonhui Han, Dosik Hwang

Abstract: Most recent Test-Time Adaptation (TTA) methods focus on updating normalization layers to adapt to the test domain. However, relying on normalization-based adaptation presents key challenges. First, normalization layers such as Batch Normalization (BN) are highly sensitive to small batch sizes, leading to unstable and inaccurate statistics. Moreover, normalization-based adaptation is inherently constrained by the structure of the pre-trained model, since it relies on training-time statistics that may not generalize to unseen domains. These issues limit the effectiveness of normalization-based TTA approaches, especially under significant domain shift. In this paper, we introduce a novel paradigm based on the concept of a Buffer layer, which addresses the fundamental limitations of normalization-layer updates. Unlike existing methods that modify the core parameters of the model, our approach preserves the integrity of the pre-trained backbone, inherently mitigating the risk of catastrophic forgetting during online adaptation. Through comprehensive experimentation, we demonstrate that our a...
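The abstract's first claim, that batch statistics become unstable at small batch sizes, can be illustrated numerically. The sketch below (not from the paper; the domain-shifted feature distribution and trial counts are illustrative assumptions) compares the error of batch-mean estimates, of the kind a BN layer would compute during online TTA, for tiny versus large test batches:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical activations from a shifted test domain: true mean 2.0, std 3.0.
TRUE_MEAN = 2.0
population = rng.normal(loc=TRUE_MEAN, scale=3.0, size=100_000)

def batch_stat_error(batch_size, trials=500):
    """Mean absolute error of the batch-mean estimate of the true mean,
    averaged over many random test batches (as seen in online adaptation)."""
    errs = []
    for _ in range(trials):
        batch = rng.choice(population, size=batch_size, replace=False)
        errs.append(abs(batch.mean() - TRUE_MEAN))
    return float(np.mean(errs))

err_small = batch_stat_error(2)    # tiny batch, typical of streaming TTA
err_large = batch_stat_error(256)  # large batch, typical of training

# Statistics estimated from 2-sample batches are roughly an order of
# magnitude noisier than those from 256-sample batches.
print(err_small, err_large)
```

This is exactly the failure mode the abstract attributes to normalization-based TTA: with batches of one or two samples, the per-batch statistics that BN substitutes for its training-time statistics are dominated by sampling noise.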