[2602.12649] Multi-Task Learning with Additive U-Net for Image Denoising and Classification

arXiv - Machine Learning 3 min read Article

Summary

This article presents the Additive U-Net architecture for image denoising and classification, highlighting its advantages in multi-task learning through controlled information flow and improved training stability.

Why It Matters

The research addresses challenges in multi-task learning by introducing a novel U-Net variant that enhances performance in both image denoising and classification tasks. This approach is significant for advancing computer vision applications, particularly in scenarios where model efficiency and stability are crucial.

Key Takeaways

  • Additive U-Net improves image denoising and classification through gated additive fusion.
  • The architecture stabilizes joint optimization and enhances training performance.
  • Shallow skips in the model prioritize reconstruction, while deeper features aid in classification.
  • The approach allows for effective task decoupling without increasing model complexity.
  • Controlled constraints on skip connections serve as a regularizer for multi-task learning.

Computer Science > Computer Vision and Pattern Recognition

arXiv:2602.12649 (cs) · [Submitted on 13 Feb 2026]

Title: Multi-Task Learning with Additive U-Net for Image Denoising and Classification
Authors: Vikram Lakkavalli, Neelam Sinha

Abstract: We investigate additive skip fusion in U-Net architectures for image denoising and denoising-centric multi-task learning (MTL). By replacing concatenative skips with gated additive fusion, the proposed Additive U-Net (AddUNet) constrains shortcut capacity while preserving fixed feature dimensionality across depth. This structural regularization induces controlled encoder-decoder information flow and stabilizes joint optimization. Across single-task denoising and joint denoising-classification settings, AddUNet achieves competitive reconstruction performance with improved training stability. In MTL, learned skip weights exhibit systematic task-aware redistribution: shallow skips favor reconstruction, while deeper features support discrimination. Notably, reconstruction remains robust even under limited classification capacity, indicating implicit task decoupling through additive fusion. These findings show that simple constraints on skip connections act as an effective architectural regularizer for stable and scalable multi-task learning without increasing model complexity.
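To make the core idea concrete, here is a minimal NumPy sketch contrasting a standard concatenative skip with a gated additive one. The exact gate parameterization in AddUNet is not given in this summary; the sigmoid-scaled scalar gate below is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

def concat_skip(enc, dec):
    # Standard U-Net skip: channel-wise concatenation doubles feature width,
    # so decoder layers must grow to absorb the extra channels.
    return np.concatenate([enc, dec], axis=1)

def additive_skip(enc, dec, gate):
    # Gated additive fusion (sketch): a bounded gate scales the encoder
    # shortcut before adding it to the decoder features. Channel count
    # stays fixed across depth, constraining shortcut capacity.
    g = 1.0 / (1.0 + np.exp(-gate))  # sigmoid keeps the gate in (0, 1)
    return g * enc + dec

# Toy feature maps: (batch, channels, height, width)
enc = np.ones((1, 64, 32, 32))
dec = np.ones((1, 64, 32, 32))

print(concat_skip(enc, dec).shape)         # (1, 128, 32, 32) - width doubles
print(additive_skip(enc, dec, 0.0).shape)  # (1, 64, 32, 32)  - width fixed
```

In a trained network the gate would be a learned parameter per skip level; the paper's observation that shallow skips favor reconstruction while deeper ones support classification corresponds to those learned gates redistributing across depth.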

Related Articles

OpenAI, not yet public, raises $3B from retail investors in monster $122B fund raise | TechCrunch
Ai Infrastructure

OpenAI's latest funding round, led by Amazon, Nvidia, and SoftBank, values the AI lab at $852 billion as it nears an IPO.

TechCrunch - AI · 4 min ·
Machine Learning

[R] Fine-tuning services report

If you have some data and want to train or run a small custom model but don't have powerful enough hardware for training, fine-tuning ser...

Reddit - Machine Learning · 1 min ·
Machine Learning

The AI Chip War is Just Getting Started

Everyone talks about AI models, but the real bottleneck might be hardware. According to a recent study by Roots Analysis: AI chip market ...

Reddit - Artificial Intelligence · 1 min ·
UMKC Announces New Master of Science in Artificial Intelligence
Ai Infrastructure

UMKC announces a new Master of Science in Artificial Intelligence program aimed at addressing workforce demand for AI expertise, set to l...

AI News - General · 4 min ·
