[2604.04552] StableTTA: Training-Free Test-Time Adaptation that Improves Model Accuracy on ImageNet1K to 96%
Computer Science > Computer Vision and Pattern Recognition
arXiv:2604.04552 (cs)
[Submitted on 6 Apr 2026]

Title: StableTTA: Training-Free Test-Time Adaptation that Improves Model Accuracy on ImageNet1K to 96%
Authors: Zheng Li, Jerry Cheng, Huanying Helen Gu

Abstract: Ensemble methods are widely used to improve predictive performance, but their effectiveness often comes at the cost of increased memory usage and computational complexity. In this paper, we identify a conflict between aggregation strategies that undermines prediction stability. We propose StableTTA, a training-free method that improves aggregation stability and efficiency. Empirical results on ImageNet-1K show top-1 accuracy gains of 10.93–32.82%, with 33 models exceeding 95% accuracy and several surpassing 96%. Notably, StableTTA allows lightweight architectures to outperform ViT by 11.75% in top-1 accuracy while using less than 5% of the parameters and reducing computational cost by approximately 89.1% (in GFLOPs), enabling high-accuracy inference on resource-constrained devices.

Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
Cite as: arXiv:2604.04552 [cs.CV] (or arXiv:2604.04552v1 [cs.CV] for this version)
DOI: https://doi.org/10.48550/arXiv.2604.04552
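The abstract's "conflict in aggregation strategies" can be made concrete with a generic sketch (hypothetical illustration only, not the StableTTA method described in the paper): when predictions from several augmented views of one input are combined, averaging raw logits and averaging softmax probabilities can disagree on the predicted class, because logit averaging lets a single extremely confident view dominate while probability averaging caps each view's influence at 1.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax along the given axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical logits from 3 augmented views of one image, 4 classes.
# Two views moderately favor class 0; one view is extremely confident in class 1.
view_logits = np.array([
    [5.0,  0.0, 0.0, 0.0],
    [0.0, 12.0, 0.0, 0.0],
    [5.0,  0.0, 0.0, 0.0],
])

# Strategy A: average raw logits across views, then take the softmax.
probs_logit_avg = softmax(view_logits.mean(axis=0))

# Strategy B: take the softmax per view, then average the probabilities.
probs_prob_avg = softmax(view_logits, axis=-1).mean(axis=0)

# The two strategies disagree on the predicted class for this input.
print(probs_logit_avg.argmax())  # class 1 (the confident outlier dominates)
print(probs_prob_avg.argmax())   # class 0 (majority of views wins)
```

The disagreement arises because the single high-magnitude logit vector outweighs the other two views under logit averaging, whereas each view contributes at most a unit of probability mass under probability averaging.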