[2603.23658] Boost Like a (Var)Pro: Trust-Region Gradient Boosting via Variable Projection
Computer Science > Machine Learning
arXiv:2603.23658 (cs) [Submitted on 24 Mar 2026]
Title: Boost Like a (Var)Pro: Trust-Region Gradient Boosting via Variable Projection
Authors: Abhijit Chowdhary, Elizabeth Newman, Deepanshu Verma
Abstract: Gradient boosting, a method of building additive ensembles from weak learners, has established itself as a practical and theoretically motivated approach to function approximation, especially with decision-tree weak learners. Comparable methods for smooth parametric learners, such as neural networks, remain less developed in both training methodology and theory. To this end, we introduce \texttt{VPBoost} ({\bf V}ariable {\bf P}rojection {\bf Boost}ing), a gradient boosting algorithm for separable smooth approximators, i.e., models with a smooth nonlinear featurizer followed by a final linear mapping. \texttt{VPBoost} fuses variable projection, a training paradigm for separable models that enforces optimality of the linear weights, with a second-order weak learning strategy. The combination of second-order boosting, separable models, and variable projection gives rise to a closed-form solution for the optimal linear weights and a natural interpretation of \texttt{VPBoost} as a functional trust-region method. We thereby leverage trust-region theory to prove \texttt{VPBoost} converges to a stationary ...
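The abstract's key structural point can be illustrated with a minimal sketch (not the authors' code): for a separable model $y \approx \Phi(x;\theta)\,w$, with a smooth nonlinear featurizer $\Phi$ and a final linear map $w$, the optimal $w$ under squared loss at any fixed $\theta$ is a closed-form linear least-squares solve. This is the "projection" that variable projection exploits; the featurizer below (a one-layer tanh map) is a hypothetical stand-in for whatever smooth learner the method would use.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 3, 8                       # samples, input dim, feature count

X = rng.normal(size=(n, d))
y = np.sin(X @ rng.normal(size=d))        # target to approximate

# Hypothetical smooth featurizer Phi(x; theta): one-layer tanh map.
theta = rng.normal(size=(d, k))
Phi = np.tanh(X @ theta)                  # n x k feature matrix

# Variable projection step: for this fixed theta, the optimal linear
# weights solve an ordinary least-squares problem in closed form.
w_opt, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# Sanity check: no other linear weights can do better at this theta.
w_rand = rng.normal(size=k)
loss = lambda w: np.mean((Phi @ w - y) ** 2)
assert loss(w_opt) <= loss(w_rand)
```

Eliminating $w$ this way leaves an optimization over $\theta$ alone, which is the setting in which the paper layers its second-order, trust-region boosting updates.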