[2603.03920] BD-Merging: Bias-Aware Dynamic Model Merging with Evidence-Guided Contrastive Learning
Computer Science > Machine Learning
arXiv:2603.03920 (cs)
[Submitted on 4 Mar 2026]

Title: BD-Merging: Bias-Aware Dynamic Model Merging with Evidence-Guided Contrastive Learning
Authors: Yuhan Xie, Chen Lyu

Abstract: Model Merging (MM) has emerged as a scalable paradigm for multi-task learning (MTL), enabling multiple task-specific models to be integrated without revisiting the original training data. Despite recent progress, the reliability of MM under test-time distribution shift remains insufficiently understood. Most existing MM methods assume that test data are clean and distributionally aligned with both the training and auxiliary sources. This assumption rarely holds in practice, however, often resulting in biased predictions and degraded generalization. To address this issue, we present BD-Merging, a bias-aware unsupervised model merging framework that explicitly models uncertainty to achieve adaptive reliability under distribution shift. First, BD-Merging introduces a joint evidential head that learns uncertainty over a unified label space, capturing cross-task semantic dependencies in MM. Second, building on this evidential foundation, we propose an Adjacency Discrepancy Score (ADS) that quantifies evidential alignment among neighboring samples. Third, guided by ADS, a discre...
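The abstract names two components without giving their formulas: a joint evidential head that learns uncertainty over a unified label space, and an Adjacency Discrepancy Score (ADS) measuring evidential alignment among neighboring samples. The minimal PyTorch sketch below illustrates how such pieces are commonly realized, assuming the standard Dirichlet-based evidential formulation (Sensoy et al., 2018) and a k-nearest-neighbor, symmetric-KL reading of ADS; the class and function names (`JointEvidentialHead`, `adjacency_discrepancy_score`) and all specifics are our assumptions, not the paper's actual design.

import torch
import torch.nn as nn
import torch.nn.functional as F


class JointEvidentialHead(nn.Module):
    """Hypothetical: maps features to non-negative evidence over a
    unified label space, as in standard evidential deep learning."""

    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_classes)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Softplus keeps evidence non-negative; alpha = evidence + 1
        # parameterizes a Dirichlet over class probabilities.
        return F.softplus(self.fc(feats))


def dirichlet_uncertainty(evidence: torch.Tensor) -> torch.Tensor:
    """Vacuity uncertainty u = K / sum(alpha): high when evidence is scarce."""
    alpha = evidence + 1.0
    num_classes = evidence.shape[-1]
    return num_classes / alpha.sum(dim=-1)


def adjacency_discrepancy_score(
    evidence: torch.Tensor, feats: torch.Tensor, k: int = 5
) -> torch.Tensor:
    """One plausible reading of ADS (assumption, not the paper's definition):
    the mean symmetric KL divergence between each sample's Dirichlet-mean
    prediction and those of its k nearest feature-space neighbors."""
    alpha = evidence + 1.0
    probs = alpha / alpha.sum(dim=-1, keepdim=True)      # (N, C) expected probabilities
    dists = torch.cdist(feats, feats)                    # (N, N) pairwise distances
    dists.fill_diagonal_(float("inf"))                   # exclude self-matches
    nn_idx = dists.topk(k, largest=False).indices        # (N, k) neighbor indices
    p = probs.unsqueeze(1)                               # (N, 1, C)
    q = probs[nn_idx]                                    # (N, k, C)
    kl_pq = (p * (p / q).log()).sum(dim=-1)              # KL(p || q) per neighbor
    kl_qp = (q * (q / p).log()).sum(dim=-1)              # KL(q || p) per neighbor
    return 0.5 * (kl_pq + kl_qp).mean(dim=-1)            # (N,) higher = more misaligned


if __name__ == "__main__":
    feats = torch.randn(32, 64)                          # toy features
    head = JointEvidentialHead(feat_dim=64, num_classes=10)
    ev = head(feats)
    print(dirichlet_uncertainty(ev).shape)               # torch.Size([32])
    print(adjacency_discrepancy_score(ev, feats).shape)  # torch.Size([32])

In this reading, a high ADS flags samples whose evidential predictions disagree with their neighbors', which is consistent with the abstract's use of ADS to guide a discrepancy-aware stage under distribution shift.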