[2504.02996] Noise-Aware Generalization: Robustness to In-Domain Noise and Out-of-Domain Generalization


Summary

This article presents a novel approach to Noise-Aware Generalization (NAG) in machine learning, addressing the combined challenges posed by label noise and domain shifts. The proposed Domain Labels for Noise Detection (DL4ND) method demonstrates significant performance improvements across seven diverse datasets.

Why It Matters

Understanding and improving model robustness in the presence of noise and domain shifts is crucial for real-world applications of machine learning. This research contributes to the field by proposing a new method that effectively combines techniques from Learning with Noisy Labels and Domain Generalization, potentially enhancing the reliability of AI systems.

Key Takeaways

  • Noise-Aware Generalization (NAG) combines the challenges of label noise and domain shifts.
  • The proposed DL4ND method outperforms existing techniques in handling noisy labels and domain variations.
  • Performance improvements of up to 12.5% were observed across seven diverse datasets.
  • Existing methods often fail when label noise is present, highlighting the need for integrated approaches.
  • The research opens avenues for further exploration in robust machine learning methodologies.

Computer Science > Machine Learning — arXiv:2504.02996 (cs)
[Submitted on 3 Apr 2025 (v1), last revised 22 Feb 2026 (this version, v2)]

Title: Noise-Aware Generalization: Robustness to In-Domain Noise and Out-of-Domain Generalization
Authors: Siqi Wang, Aoming Liu, Bryan A. Plummer

Abstract: Methods addressing Learning with Noisy Labels (LNL) and multi-source Domain Generalization (DG) use training techniques to improve downstream task performance in the presence of label noise or domain shifts, respectively. Prior work often explores these tasks in isolation, and the limited work that does investigate their intersection, which we refer to as Noise-Aware Generalization (NAG), only benchmarks existing methods without also proposing an approach to reduce its effect. We find that this is likely due, in part, to the new challenges that arise when exploring NAG, which do not appear in LNL or DG alone. For example, we show that the effectiveness of DG methods is compromised in the presence of label noise, making them largely ineffective. Similarly, LNL methods often overfit to easy-to-learn domains as they confuse domain shifts for label noise. Instead, we propose Domain Labels for Noise Detection (DL4ND), the first direct method developed for NAG, which uses our observation that noisy samples that ma...
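To make the NAG problem setting concrete, the following is a minimal sketch of multi-source training data where each source domain carries its own label noise. This is an illustration of the setting only, not the paper's DL4ND method; the noise-injection helper, domain names, and noise rates are all assumptions chosen for the example.

```python
# Illustrative sketch of the NAG setting: several source domains share the
# same underlying classes, but each domain's labels are corrupted at a
# different (unknown) rate. All names and rates here are hypothetical.
import random

def inject_symmetric_noise(labels, num_classes, noise_rate, seed=0):
    """With probability noise_rate, flip each label to a different,
    uniformly chosen class (standard symmetric label noise)."""
    rng = random.Random(seed)
    noisy = []
    for y in labels:
        if rng.random() < noise_rate:
            noisy.append(rng.choice([c for c in range(num_classes) if c != y]))
        else:
            noisy.append(y)
    return noisy

# A 4-class toy labeling shared across three hypothetical source domains.
clean = [i % 4 for i in range(200)]
domains = {
    "photo":  inject_symmetric_noise(clean, 4, 0.1, seed=1),
    "sketch": inject_symmetric_noise(clean, 4, 0.3, seed=2),
    "art":    inject_symmetric_noise(clean, 4, 0.5, seed=3),
}
for name, noisy in domains.items():
    frac = sum(a != b for a, b in zip(clean, noisy)) / len(clean)
    print(f"{name}: observed noise fraction = {frac:.2f}")
```

A method evaluated under NAG must cope with both effects at once: within each domain some labels are wrong, and across domains the input distribution shifts, which is why (per the abstract) LNL methods can mistake domain shift for noise and DG methods degrade under noise.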
