[2603.25083] Learning domain-invariant features through channel-level sparsification for Out-Of Distribution Generalization
Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.25083 (cs) [Submitted on 26 Mar 2026]

Title: Learning domain-invariant features through channel-level sparsification for Out-Of Distribution Generalization

Authors: Haoran Pei, Yuguang Yang, Kexin Liu, Juan Zhang, Baochang Zhang

Abstract: Out-of-Distribution (OOD) generalization has become a primary metric for evaluating image analysis systems. Because deep learning models tend to capture domain-specific context, they often develop shortcut dependencies on these non-causal features, leading to inconsistent performance across data sources. Current techniques, such as invariance learning, attempt to mitigate this, but they struggle to isolate highly mixed features within deep latent spaces, which prevents them from fully resolving shortcut learning. In this paper, we propose Hierarchical Causal Dropout (HCD), a method that uses channel-level causal masks to enforce feature sparsity. This approach allows the model to separate causal features from spurious ones, effectively performing a causal intervention at the representation level. Training is guided by a Matrix-based Mutual Information (MMI) objective to minimize the mutual information between latent features and d...
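The abstract does not specify the paper's exact formulation, but the two ingredients it names have standard minimal forms: a per-channel gate that sparsifies feature maps, and the matrix-based Rényi entropy functional (order α = 2) commonly used to estimate mutual information from Gram matrices. The sketch below illustrates both under those assumptions; all function names, the gating scheme, and the kernel choices are hypothetical and are not taken from the paper.

```python
import numpy as np

def channel_mask(features, gate_logits, threshold=0.5):
    # Per-channel gating: one sigmoid gate per channel, hard-thresholded
    # into a binary mask and broadcast over (batch, channel, H, W).
    gates = 1.0 / (1.0 + np.exp(-gate_logits))
    mask = (gates > threshold).astype(features.dtype)
    return features * mask[None, :, None, None], gates

def matrix_entropy(K):
    # Matrix-based Renyi entropy of order 2: normalize the Gram matrix K
    # to unit trace, then S_2(A) = -log2 tr(A @ A).
    A = K / np.trace(K)
    return -np.log2(np.trace(A @ A))

def matrix_mutual_information(Kx, Ky):
    # I(X; Y) = S(A) + S(B) - S(A o B), where the joint Gram matrix is
    # the Hadamard (elementwise) product of the two kernels.
    return matrix_entropy(Kx) + matrix_entropy(Ky) - matrix_entropy(Kx * Ky)

def rbf_gram(X, sigma=1.0):
    # RBF kernel matrix over the row vectors of X.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma**2))

# Demo: mask a feature tensor, then estimate the mutual information
# between the masked features and one-hot domain labels (the quantity
# an MMI-style objective would drive toward zero).
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 4, 2, 2))            # (batch, channels, H, W)
logits = np.array([3.0, -3.0, 3.0, -3.0])            # keep channels 0 and 2
masked, gates = channel_mask(feats, logits)

domains = np.eye(2)[np.array([0, 0, 0, 0, 1, 1, 1, 1])]  # one-hot domains
Kf = rbf_gram(masked.reshape(8, -1))                 # kernel on features
Kd = domains @ domains.T                             # linear kernel on labels
mi = matrix_mutual_information(Kf, Kd)               # scalar to be minimized
```

In a trainable version the gates would be learned (e.g. with a straight-through estimator or concrete relaxation) and an L1 penalty on the gate values would enforce the channel-level sparsity the abstract describes.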