[2603.01759] Meta-Learning Hyperparameters for Parameter Efficient Fine-Tuning
Computer Science > Machine Learning
arXiv:2603.01759 (cs)
[Submitted on 2 Mar 2026]

Title: Meta-Learning Hyperparameters for Parameter Efficient Fine-Tuning
Authors: Zichen Tian, Yaoyao Liu, Qianru Sun

Abstract: Training large foundation models from scratch for domain-specific applications is nearly impossible due to limited data and long-tailed distributions -- taking remote sensing (RS) as an example. Fine-tuning models pre-trained on natural images with RS data is a straightforward solution. To reduce computational costs and improve performance on tail classes, existing methods apply parameter-efficient fine-tuning (PEFT) techniques such as LoRA and AdaptFormer. However, we observe that fixed hyperparameters -- such as intra-layer positions, layer depth, and scaling factors -- can considerably hinder PEFT performance, as fine-tuning on RS images is highly sensitive to these settings. To address this, we propose MetaPEFT, a method incorporating adaptive scalers that dynamically adjust module influence during fine-tuning. MetaPEFT adjusts three key factors of PEFT on RS images: module insertion, layer selection, and module-wise learning rates, which collectively control the influence of PEFT modules across the network. We conduct extensive experiments on three transfer-learning scenarios and five datasets ...
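To make the abstract's idea of an adaptive scaler concrete, below is a minimal sketch of a LoRA-style module whose contribution is gated by a learnable per-module scalar: when the scaler is zero the module is effectively not inserted, so learning the scaler can subsume module-insertion and layer-selection decisions. All names (`ScaledLoRA`, `alpha`) and the exact parameterization are illustrative assumptions, not the paper's actual implementation; pure-Python lists stand in for tensors.

```python
# Sketch of a PEFT module with a learnable per-module scaler (hypothetical;
# MetaPEFT's real parameterization may differ).

def matvec(M, x):
    """Multiply matrix M (list of rows) by vector x."""
    return [sum(m * v for m, v in zip(row, x)) for row in M]

class ScaledLoRA:
    def __init__(self, W, A, B, alpha=0.0):
        self.W = W          # frozen pre-trained weight
        self.A = A          # low-rank down-projection (r x d)
        self.B = B          # low-rank up-projection (d x r)
        self.alpha = alpha  # learnable scaler; alpha=0 disables the module

    def forward(self, x):
        base = matvec(self.W, x)                   # frozen backbone path
        delta = matvec(self.B, matvec(self.A, x))  # low-rank update B @ A @ x
        return [b + self.alpha * d for b, d in zip(base, delta)]
```

In this sketch, optimizing `alpha` jointly with `A` and `B` (or meta-learning it, as the abstract suggests) lets the network smoothly interpolate between ignoring and fully using each inserted module.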