[2509.07252] GCond: Gradient Conflict Resolution via Accumulation-based Stabilization for Large-Scale Multi-Task Learning

arXiv - Machine Learning 4 min read

About this article

Computer Science > Machine Learning
arXiv:2509.07252 (cs) [Submitted on 8 Sep 2025 (v1), last revised 1 Apr 2026 (this version, v2)]
Title: GCond: Gradient Conflict Resolution via Accumulation-based Stabilization for Large-Scale Multi-Task Learning
Authors: Evgeny Alves Limarenko, Anastasiia Studenikina, Svetlana Illarionova, Maxim Sharaev
Abstract: In multi-task learning (MTL), gradient conflict poses a significant challenge. Effective methods for addressing this problem, including PCGrad, CAGrad, and GradNorm, are computationally demanding in their original implementations, which significantly limits their application in modern large models such as transformers. We propose Gradient Conductor (GCond), a method that builds on PCGrad principles by combining them with gradient accumulation and an adaptive arbitration mechanism. We evaluated GCond on self-supervised multi-task learning tasks using MobileNetV3-Small and ConvNeXt architectures on the ImageNet 1K dataset and a combined head and neck CT scan dataset, comparing the proposed method against baseline linear combinations and state-of-the-art gradient conflict resolution methods. Both the classical and stochastic variants of GCond were analyzed. The stochastic mode of GCond achieved a two-fold computational speedup while maintaini...
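As context for the abstract, here is a minimal sketch of the two ingredients it names: the PCGrad-style projection (when two task gradients conflict, the conflicting component is projected out) and gradient accumulation before conflict resolution, which amortizes the projection cost. All names here are illustrative; this is not the paper's GCond implementation, and its adaptive arbitration mechanism is not reproduced.

```python
import numpy as np

def pcgrad_project(grads):
    """PCGrad-style conflict resolution (sketch): for each task gradient,
    remove the component that conflicts (negative dot product) with every
    other task's gradient."""
    projected = [g.astype(float).copy() for g in grads]
    for i, g_i in enumerate(projected):
        for j, g_j in enumerate(grads):
            if i == j:
                continue
            dot = float(np.dot(g_i, g_j))
            if dot < 0:  # gradients point in conflicting directions
                g_i -= dot / (float(np.dot(g_j, g_j)) + 1e-12) * g_j
    return projected

class GradientAccumulator:
    """Illustrative accumulation-based stabilization: sum per-task gradients
    over several micro-batches, then resolve conflicts once on the averaged
    gradients instead of on every step."""
    def __init__(self, n_tasks, dim, accum_steps):
        self.buffers = np.zeros((n_tasks, dim))
        self.accum_steps = accum_steps
        self.step = 0

    def add(self, task_grads):
        self.buffers += np.asarray(task_grads, dtype=float)
        self.step += 1
        if self.step < self.accum_steps:
            return None  # still accumulating
        resolved = pcgrad_project(list(self.buffers / self.accum_steps))
        self.buffers[:] = 0.0
        self.step = 0
        return np.mean(resolved, axis=0)  # combined update direction
```

With gradients g1 = [1, 0] and g2 = [-1, 1] (dot product -1, i.e. conflicting), the projection yields [0.5, 0.5] and [0, 1], each orthogonal to the other original gradient, so neither task's update directly undoes the other's.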

Originally published on April 03, 2026. Curated by AI News.
