[2602.18866] Boosting for Vector-Valued Prediction and Conditional Density Estimation
Summary
This paper explores boosting techniques for vector-valued prediction and conditional density estimation, addressing theoretical gaps in aggregation beyond scalar losses.
Why It Matters
Understanding boosting in vector-valued contexts enhances predictive modeling capabilities in machine learning. This research provides a theoretical foundation for improving algorithms, which can lead to better performance in various applications, including structured prediction tasks.
Key Takeaways
- Introduces $(\alpha,\beta)$-boostability as a stability condition for aggregation.
- Demonstrates that geometric median aggregation can achieve boostability across various divergences.
- Proposes a new boosting framework, GeoMedBoost, which generalizes existing algorithms.
- Characterizes boostability under common divergences, revealing a sharp distinction between dimension-dependent and dimension-free regimes.
- Provides insights into handling KL divergence indirectly through Hellinger distance.
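The aggregation step at the heart of these results replaces averaging with the geometric median, which is robust to a minority of poor weak predictions. The paper's GeoMedBoost framework is not reproduced here; the sketch below only illustrates the geometric median itself, computed via Weiszfeld's algorithm (a standard iteratively reweighted mean), applied to hypothetical vector-valued predictions from several weak learners:

```python
import numpy as np

def geometric_median(points, tol=1e-7, max_iter=200):
    """Weiszfeld's algorithm: iteratively reweighted mean that converges
    to the point minimizing the sum of Euclidean distances to `points`."""
    y = points.mean(axis=0)  # initialize at the centroid
    for _ in range(max_iter):
        d = np.linalg.norm(points - y, axis=1)
        d = np.maximum(d, 1e-12)  # guard against division by zero
        w = 1.0 / d
        y_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < tol:
            return y_new
        y = y_new
    return y

# Hypothetical vector-valued predictions from four weak learners at one input;
# three agree near (1, 0) and one is an outlier.
preds = np.array([[1.0, 0.0], [0.9, 0.1], [1.1, -0.1], [5.0, 5.0]])
agg = geometric_median(preds)
```

Unlike the coordinate-wise mean, which the outlier drags to (2.0, 1.25), the geometric median stays close to the majority cluster, which is the robustness property the boostability analysis exploits.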
Computer Science > Machine Learning
arXiv:2602.18866 (cs) [Submitted on 21 Feb 2026]
Title: Boosting for Vector-Valued Prediction and Conditional Density Estimation
Authors: Jian Qian, Shu Ge
Abstract: Despite the widespread use of boosting in structured prediction, a general theoretical understanding of aggregation beyond scalar losses remains incomplete. We study vector-valued and conditional density prediction under general divergences and identify stability conditions under which aggregation amplifies weak guarantees into strong ones. We formalize this stability property as $(\alpha,\beta)$-boostability. We show that geometric median aggregation achieves $(\alpha,\beta)$-boostability for a broad class of divergences, with tradeoffs that depend on the underlying geometry. For vector-valued prediction and conditional density estimation, we characterize boostability under common divergences ($\ell_1$, $\ell_2$, total variation, and Hellinger) with geometric median, revealing a sharp distinction between dimension-dependent and dimension-free regimes. We further show that while KL divergence is not directly boostable via geometric median aggregation, it can be handled indirectly through boostability under Hellinger distance. Building on these structural results, we propose a generic boosting framework GeoMedBoost based...