[2603.00478] Benchmarking Few-shot Transferability of Pre-trained Models with Improved Evaluation Protocols
Computer Science > Machine Learning

arXiv:2603.00478 (cs)

[Submitted on 28 Feb 2026]

Title: Benchmarking Few-shot Transferability of Pre-trained Models with Improved Evaluation Protocols

Authors: Xu Luo, Ji Zhang, Lianli Gao, Heng Tao Shen, Jingkuan Song

Abstract: Few-shot transfer has been revolutionized by stronger pre-trained models and improved adaptation methods. However, the field still lacks a unified, rigorous evaluation protocol that is both challenging and realistic for real-world usage. In this work, we establish FEWTRANS, a comprehensive benchmark containing 10 diverse datasets, and propose the Hyperparameter Ensemble (HPE) protocol to overcome the "validation set illusion" in data-scarce regimes. Our empirical findings demonstrate that the choice of pre-trained model is the dominant factor for performance, while many sophisticated transfer methods offer negligible practical advantages over a simple full-parameter fine-tuning baseline. To explain this surprising effectiveness, we provide an in-depth mechanistic analysis showing that full fine-tuning succeeds via distributed micro-adjustments and more flexible reshaping of high-level semantic representations without suffering from overfitting. Additionally, we quantify the performance collapse of multimodal models in specialized domains as a result...
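The "validation set illusion" the abstract refers to can be illustrated with a small simulation. The paper's actual HPE protocol is not detailed here, so the sketch below only demonstrates the underlying intuition: with a handful of validation examples, selecting one hyperparameter configuration by measured validation accuracy is noisy, whereas ensembling over all configurations is robust to that noise. The configurations, their accuracies, and the independence assumption are all made up for the illustration, not taken from the paper.

```python
import random

random.seed(0)

# Hypothetical configs with hidden true test accuracies (made-up numbers).
true_acc = {"lr=1e-2": 0.70, "lr=1e-3": 0.80, "lr=1e-4": 0.60}
N_VAL, TRIALS = 5, 20_000  # tiny validation set, many simulated runs

def measured_acc(p, n=N_VAL):
    # Accuracy estimated from n i.i.d. validation examples.
    return sum(random.random() < p for _ in range(n)) / n

# (a) Model selection: pick the config that looks best on the tiny val set,
# then report its *true* accuracy, averaged over many trials.
selection = 0.0
for _ in range(TRIALS):
    picked = max(true_acc, key=lambda k: measured_acc(true_acc[k]))
    selection += true_acc[picked]
selection /= TRIALS

# (b) Majority-vote ensemble over all configs, assuming independent errors.
ensemble = 0.0
for _ in range(TRIALS):
    votes = sum(random.random() < p for p in true_acc.values())
    ensemble += votes >= 2  # majority of the 3 models is correct
ensemble /= TRIALS

print(f"select-by-val expected accuracy: {selection:.3f}")
print(f"ensemble accuracy:               {ensemble:.3f}")
```

Under these toy assumptions the ensemble's accuracy exceeds the expected accuracy of selection, because a 5-example validation set frequently crowns a suboptimal configuration; this is one plausible reading of why an ensemble-style protocol is preferable to model selection in data-scarce regimes.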