[2602.16531] Transfer Learning of Linear Regression with Multiple Pretrained Models: Benefiting from More Pretrained Models via Overparameterization Debiasing
Summary
This paper studies transfer learning in linear regression using multiple pretrained models. It analyzes when using more pretrained models improves the learned target model, and proposes a debiasing method that corrects the bias introduced when the pretrained models are overparameterized.
Why It Matters
Understanding how to effectively utilize multiple pretrained models in transfer learning can significantly improve predictive performance in machine learning tasks. This research addresses the challenges posed by overparameterization bias, offering practical solutions that can be applied across various domains in AI and data science.
Key Takeaways
- Transfer learning can be enhanced by using multiple pretrained models.
- Overparameterized pretrained models carry a bias: the minimum $\ell_2$-norm solution is confined to the low-dimensional subspace spanned by the source training examples.
- A simple multiplicative correction factor can debias the pretrained models and reduce this effect.
- The study provides empirical evaluations to support theoretical claims.
- Understanding the balance between model complexity and performance is crucial.
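The overparameterization bias in the takeaways can be made concrete: when a least-squares problem has more parameters than training examples, the minimum $\ell_2$-norm solution lies entirely in the row space of the data matrix, so any component of the true parameter vector orthogonal to that subspace is lost. A minimal NumPy sketch (the dimensions and variable names are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100                      # n examples, d parameters: overparameterized (d > n)
theta_true = rng.standard_normal(d)
X = rng.standard_normal((n, d))
y = X @ theta_true                  # noiseless labels, for clarity

# Minimum l2-norm least-squares solution via the pseudoinverse.
theta_hat = np.linalg.pinv(X) @ y

# theta_hat lies in the n-dimensional row space of X: projecting it onto
# that subspace leaves it unchanged.
P = X.T @ np.linalg.pinv(X @ X.T) @ X   # orthogonal projector onto the row space of X
assert np.allclose(P @ theta_hat, theta_hat)

# The component of theta_true orthogonal to the row space is unrecoverable:
# this is the subspace restriction the paper calls overparameterization bias.
orth_norm = np.linalg.norm(theta_true - P @ theta_true)
print(f"norm of the lost orthogonal component: {orth_norm:.2f}")
```

Because a random 20-dimensional subspace of a 100-dimensional space misses most directions, the lost component is substantial, which is why a single overparameterized pretrained model can transfer poorly on its own.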
Computer Science > Machine Learning
arXiv:2602.16531 (cs) [Submitted on 18 Feb 2026]
Title: Transfer Learning of Linear Regression with Multiple Pretrained Models: Benefiting from More Pretrained Models via Overparameterization Debiasing
Authors: Daniel Boharon, Yehuda Dar
Abstract: We study transfer learning for a linear regression task using several least-squares pretrained models that can be overparameterized. We formulate the target learning task as optimization that minimizes squared errors on the target dataset with penalty on the distance of the learned model from the pretrained models. We analytically formulate the test error of the learned target model and provide the corresponding empirical evaluations. Our results elucidate when using more pretrained models can improve transfer learning. Specifically, if the pretrained models are overparameterized, using sufficiently many of them is important for beneficial transfer learning. However, the learning may be compromised by overparameterization bias of pretrained models, i.e., the minimum $\ell_2$-norm solution's restriction to a small subspace spanned by the training examples in the high-dimensional parameter space. We propose a simple debiasing via multiplicative correction factor that can reduce the o...
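The target optimization described in the abstract (squared error on the target data plus a penalty on the distance from the pretrained models) has a closed form when the penalty is a sum of squared distances. The sketch below assumes the penalty $\lambda \sum_k \|\theta - \hat\theta_k\|^2$ and uses a scalar rescaling of each pretrained model as an illustrative stand-in for the paper's multiplicative correction factor; the chosen scale $d/n_s$ reflects the fact that a min-norm fit on $n_s$ isotropic source examples retains on average an $n_s/d$ fraction of each signal coordinate, and is not taken from the paper:

```python
import numpy as np

def transfer_fit(X, y, pretrained, lam=1.0):
    """Minimize ||y - X @ theta||^2 + lam * sum_k ||theta - theta_k||^2.

    Closed form: theta = (X^T X + K*lam*I)^{-1} (X^T y + lam * sum_k theta_k),
    where K is the number of pretrained models.
    """
    K = len(pretrained)
    d = X.shape[1]
    A = X.T @ X + K * lam * np.eye(d)
    b = X.T @ y + lam * sum(pretrained)
    return np.linalg.solve(A, b)

def debias(theta_k, scale):
    """Illustrative multiplicative correction: rescale a pretrained model whose
    min-norm fit shrinks the signal (the exact factor in the paper may differ)."""
    return scale * theta_k

rng = np.random.default_rng(1)
n_t, n_s, d = 15, 20, 60
theta_star = rng.standard_normal(d)
X_t = rng.standard_normal((n_t, d))
y_t = X_t @ theta_star + 0.1 * rng.standard_normal(n_t)

# Overparameterized pretrained models, each a min-norm fit on a small source set.
pretrained = []
for _ in range(5):
    Xs = rng.standard_normal((n_s, d))
    pretrained.append(np.linalg.pinv(Xs) @ (Xs @ theta_star))

# Debias each pretrained model, then solve the penalized target problem.
theta_hat = transfer_fit(X_t, y_t, [debias(t, d / n_s) for t in pretrained])
```

Averaging over several debiased pretrained models reduces the variance the rescaling introduces, which matches the abstract's point that sufficiently many overparameterized pretrained models are needed for transfer to help.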