[2601.21315] Distributionally Robust Classification for Multi-source Unsupervised Domain Adaptation
Summary
This paper presents a novel distributionally robust learning framework for multi-source unsupervised domain adaptation, addressing challenges in scenarios with limited target data.
Why It Matters
As machine learning systems are increasingly deployed under distribution shift, this research matters for real-world applications where labeled target data is scarce. The proposed method improves robustness to shifts between training and deployment distributions, making it relevant across industries that rely on adapting models to new domains.
Key Takeaways
- Introduces a robust learning framework for unsupervised domain adaptation.
- Addresses challenges of limited unlabeled target data and spurious correlations.
- Demonstrates superior performance over existing methods in various scenarios.
Computer Science > Machine Learning
arXiv:2601.21315 (cs) [Submitted on 29 Jan 2026 (v1), last revised 23 Feb 2026 (this version, v2)]
Title: Distributionally Robust Classification for Multi-source Unsupervised Domain Adaptation
Authors: Seonghwi Kim, Sung Ho Jo, Wooseok Ha, Minwoo Chae
Abstract: Unsupervised domain adaptation (UDA) is a statistical learning problem in which the distribution of the training (source) data differs from that of the test (target) data. In this setting, one has access to labeled data only from the source domain and unlabeled data from the target domain. The central objective is to leverage the source data and the unlabeled target data to build models that generalize to the target domain. Despite its potential, existing UDA approaches often struggle in practice, particularly in scenarios where the target domain offers only limited unlabeled data or spurious correlations dominate the source domain. To address these challenges, we propose a novel distributionally robust learning framework that models uncertainty in both the covariate distribution and the conditional label distribution. Our approach is motivated by the multi-source domain adaptation setting but is also directly applicable to the single-source scenario, making it versatile in practice. We develop an efficient le...
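To illustrate the general idea of distributionally robust learning over multiple source domains (not the paper's specific estimator, whose details are behind the truncated abstract), here is a minimal group-DRO sketch: a linear classifier is trained to minimize the worst-case loss across source domains rather than the average. The two toy domains, the shared labeling rule, and all hyperparameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multi-source setup: two source domains share one labeling rule
# (direction [1, 0.5]) but have shifted covariate distributions.
def make_domain(shift, n=200):
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X @ np.array([1.0, 0.5]) > 0).astype(float)
    return X, y

domains = [make_domain(0.0), make_domain(1.5)]

def loss(w, X, y):
    """Average logistic loss with labels y in {0, 1}, computed stably."""
    signed = np.where(y == 1.0, X @ w, -(X @ w))
    return np.mean(np.logaddexp(0.0, -signed))

def grad(w, X, y):
    # Stable sigmoid via tanh to avoid overflow in exp.
    p = 0.5 * (1.0 + np.tanh((X @ w) / 2.0))
    return X.T @ (p - y) / len(y)

# Group DRO: each step descends the loss of the currently worst domain,
# approximating min_w max_d loss_d(w).
w = np.zeros(2)
for _ in range(500):
    losses = [loss(w, X, y) for X, y in domains]
    worst = int(np.argmax(losses))
    Xd, yd = domains[worst]
    w -= 0.1 * grad(w, Xd, yd)

worst_loss = max(loss(w, X, y) for X, y in domains)
```

Minimizing the worst-domain loss is what protects against distribution shift: a model that merely averages over sources can sacrifice one domain to fit the others, while the min-max objective cannot. The paper's framework goes further by placing uncertainty sets around both the covariate and conditional label distributions, rather than only reweighting fixed source domains.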