[2602.14231] Robust multi-task boosting using clustering and local ensembling

arXiv - Machine Learning

Summary

The paper presents Robust Multi-Task Boosting using Clustering and Local Ensembling (RMB-CLE), a framework that enhances multi-task learning by adaptively clustering tasks based on cross-task prediction errors, thereby preventing negative transfer and improving predictive performance.

Why It Matters

This research addresses a significant challenge in multi-task learning: the risk of negative transfer when unrelated tasks share information. By introducing a method that clusters tasks based on their performance errors, RMB-CLE provides a theoretically grounded and scalable solution that can enhance the effectiveness of machine learning models across various applications.

Key Takeaways

  • RMB-CLE integrates error-based clustering with local ensembling for robust multi-task learning.
  • The framework adapts task clusters dynamically, improving knowledge sharing while preserving task-specific patterns.
  • Experiments demonstrate RMB-CLE's superior performance compared to traditional multi-task and ensemble methods.
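The error-based clustering step listed above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the least-squares base learners, the symmetrization, the average-linkage choice, and all variable names are assumptions. The idea is that a model fitted on task i is evaluated on task j's data, and the resulting cross-task error matrix serves as a task dissimilarity for agglomerative clustering.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)

# Synthetic tasks in two latent groups: slope near +1 vs. slope near -1.
true_w = np.array([1.0, 1.1, 0.9, -1.0, -1.05, -0.95])
tasks = []
for w in true_w:
    X = rng.normal(size=(200, 1))
    y = w * X[:, 0] + 0.1 * rng.normal(size=200)
    tasks.append((X, y))

n = len(tasks)
# Per-task least-squares fits stand in for the paper's base learners.
coefs = [np.linalg.lstsq(X, y, rcond=None)[0] for X, y in tasks]

# Cross-task error: MSE of task i's model evaluated on task j's data.
E = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        Xj, yj = tasks[j]
        E[i, j] = np.mean((Xj @ coefs[i] - yj) ** 2)

# Symmetrize and zero the diagonal to obtain a dissimilarity matrix.
D = 0.5 * (E + E.T)
np.fill_diagonal(D, 0.0)

# Agglomerative (average-linkage) clustering of tasks into two groups.
Z = linkage(squareform(D, checks=False), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)
```

On this toy data the two slope groups are far apart in cross-task error (roughly 4 between groups versus about 0.01 within), so the clustering cleanly recovers them.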

Computer Science > Machine Learning

arXiv:2602.14231 (cs) [Submitted on 15 Feb 2026]

Title: Robust multi-task boosting using clustering and local ensembling

Authors: Seyedsaman Emami, Daniel Hernández-Lobato, Gonzalo Martínez-Muñoz

Abstract: Multi-Task Learning (MTL) aims to boost predictive performance by sharing information across related tasks, yet conventional methods often suffer from negative transfer when unrelated or noisy tasks are forced to share representations. We propose Robust Multi-Task Boosting using Clustering and Local Ensembling (RMB-CLE), a principled MTL framework that integrates error-based task clustering with local ensembling. Unlike prior work that assumes fixed clusters or hand-crafted similarity metrics, RMB-CLE derives inter-task similarity directly from cross-task errors, which admit a risk decomposition into functional mismatch and irreducible noise, providing a theoretically grounded mechanism to prevent negative transfer. Tasks are grouped adaptively via agglomerative clustering, and within each cluster, a local ensemble enables robust knowledge sharing while preserving task-specific patterns. Experiments show that RMB-CLE recovers ground-truth clusters in synthetic data and consistently outperforms multi-task, single-task, and pooling-based ensemble methods across diverse real-world ...
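The abstract's local-ensembling idea, one shared ensemble per recovered task cluster, might look like the minimal sketch below. The cluster assignment, the task generator, and the use of scikit-learn's `GradientBoostingRegressor` as the boosted ensemble are all assumptions for illustration, not the paper's algorithm: tasks in the same cluster pool their data to fit a shared ensemble, so unrelated clusters never exchange information.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)

def make_task(w, n=150):
    """Synthetic 1-D regression task with slope w and small noise."""
    X = rng.normal(size=(n, 1))
    return X, w * X[:, 0] + 0.1 * rng.normal(size=n)

# Four tasks; the clusters are assumed already recovered from cross-task errors.
tasks = [make_task(w) for w in (1.0, 1.1, -1.0, -0.9)]
clusters = {0: [0, 1], 1: [2, 3]}  # cluster id -> task indices

# One local boosted ensemble per cluster, fitted on the pooled cluster data.
local = {}
for cid, idx in clusters.items():
    Xc = np.vstack([tasks[i][0] for i in idx])
    yc = np.concatenate([tasks[i][1] for i in idx])
    local[cid] = GradientBoostingRegressor(n_estimators=50).fit(Xc, yc)

# A prediction for a task routes through its cluster's local ensemble,
# so the two unrelated clusters never share information.
x_test = np.array([[1.0]])
pred_pos = local[0].predict(x_test)[0]  # cluster of slope-(+1) tasks
pred_neg = local[1].predict(x_test)[0]  # cluster of slope-(-1) tasks
print(pred_pos, pred_neg)
```

Pooling all four tasks into a single ensemble would instead average the two opposite slopes toward zero, which is exactly the negative-transfer failure mode the abstract describes.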
