[2602.14231] Robust multi-task boosting using clustering and local ensembling
Summary
The paper presents Robust Multi-Task Boosting using Clustering and Local Ensembling (RMB-CLE), a framework that enhances multi-task learning by adaptively clustering tasks based on error metrics, thereby preventing negative transfer and improving predictive performance.
Why It Matters
This research addresses a significant challenge in multi-task learning: the risk of negative transfer when unrelated tasks share information. By introducing a method that clusters tasks based on their performance errors, RMB-CLE provides a theoretically grounded and scalable solution that can enhance the effectiveness of machine learning models across various applications.
Key Takeaways
- RMB-CLE integrates error-based clustering with local ensembling for robust multi-task learning.
- The framework adapts task clusters dynamically, improving knowledge sharing while preserving task-specific patterns.
- Experiments demonstrate RMB-CLE's superior performance compared to traditional multi-task and ensemble methods.
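The error-based clustering idea behind RMB-CLE can be illustrated with a minimal sketch: fit a model per task, measure each model's error on every other task, and cluster tasks on the symmetrized error matrix. Here ordinary least-squares models stand in for the paper's boosted learners, and scipy's average-linkage agglomerative clustering is an illustrative choice, not the authors' implementation.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)

# Synthetic setup: two groups of linear tasks sharing weights within a group.
def make_task(w, n=200, noise=0.1):
    X = rng.normal(size=(n, 2))
    y = X @ w + noise * rng.normal(size=n)
    return X, y

w_a, w_b = np.array([1.0, -1.0]), np.array([-1.0, 1.0])
tasks = [make_task(w_a), make_task(w_a), make_task(w_b), make_task(w_b)]

# Fit one least-squares model per task, then evaluate it on every task.
models = [np.linalg.lstsq(X, y, rcond=None)[0] for X, y in tasks]
T = len(tasks)
E = np.zeros((T, T))
for i in range(T):
    for j, (X, y) in enumerate(tasks):
        E[i, j] = np.mean((X @ models[i] - y) ** 2)  # cross-task MSE

# Symmetrize the cross-task errors into a distance matrix.
D = 0.5 * (E + E.T)
np.fill_diagonal(D, 0.0)

# Agglomerative (average-linkage) clustering on error-derived distances.
Z = linkage(squareform(D, checks=False), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)
```

With this toy data, within-group errors reduce to the noise floor while cross-group errors are dominated by the weight mismatch, so the two ground-truth groups are recovered, mirroring the synthetic-data experiment described in the abstract.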
Computer Science > Machine Learning
arXiv:2602.14231 (cs) [Submitted on 15 Feb 2026]
Title: Robust multi-task boosting using clustering and local ensembling
Authors: Seyedsaman Emami, Daniel Hernández-Lobato, Gonzalo Martínez-Muñoz
Abstract: Multi-Task Learning (MTL) aims to boost predictive performance by sharing information across related tasks, yet conventional methods often suffer from negative transfer when unrelated or noisy tasks are forced to share representations. We propose Robust Multi-Task Boosting using Clustering and Local Ensembling (RMB-CLE), a principled MTL framework that integrates error-based task clustering with local ensembling. Unlike prior work that assumes fixed clusters or hand-crafted similarity metrics, RMB-CLE derives inter-task similarity directly from cross-task errors, which admit a risk decomposition into functional mismatch and irreducible noise, providing a theoretically grounded mechanism to prevent negative transfer. Tasks are grouped adaptively via agglomerative clustering, and within each cluster, a local ensemble enables robust knowledge sharing while preserving task-specific patterns. Experiments show that RMB-CLE recovers ground-truth clusters in synthetic data and consistently outperforms multi-task, single-task, and pooling-based ensemble methods across diverse real-wor...
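The risk decomposition the abstract refers to can be sketched under standard assumptions (squared loss, additive noise $y_i = f_i(x) + \varepsilon_i$ with zero-mean noise of variance $\sigma_i^2$ independent of $x$); the exact form used in the paper may differ:

```latex
R_{ij}
  = \mathbb{E}\big[(f_j(x) - y_i)^2\big]
  = \underbrace{\mathbb{E}\big[(f_j(x) - f_i(x))^2\big]}_{\text{functional mismatch}}
  \; + \; \underbrace{\sigma_i^2}_{\text{irreducible noise}}
```

Only the first term reflects genuine task dissimilarity, which is why a clustering driven by cross-task errors can separate related from unrelated tasks rather than being confounded by per-task noise.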