[2603.22908] Dual-Teacher Distillation with Subnetwork Rectification for Black-Box Domain Adaptation
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.22908 (cs)
[Submitted on 24 Mar 2026]

Title: Dual-Teacher Distillation with Subnetwork Rectification for Black-Box Domain Adaptation
Authors: Zhe Zhang, Jing Li, Wanli Xue, Xu Cheng, Jianhua Zhang, Qinghua Hu, Shengyong Chen

Abstract: Assuming that neither the source data nor the source model is accessible, black-box domain adaptation is a highly practical yet extremely challenging setting: transferable information is restricted to the predictions of the black-box source model, which can only be queried with target samples. Existing approaches attempt to extract transferable knowledge through pseudo-label refinement or by leveraging external vision-language models (ViLs), but they often suffer from noisy supervision or insufficient utilization of the semantic priors provided by ViLs, which ultimately hinders adaptation performance. To overcome these limitations, we propose a Dual-Teacher Distillation with Subnetwork Rectification (DDSR) model that jointly exploits the specific knowledge embedded in black-box source models and the general semantic information of a ViL. DDSR adaptively integrates their complementary predictions to generate reliable pseudo labels for the target domain and introduces a subnetwork-driven regul...
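The abstract above is truncated, so the paper's exact fusion rule is not visible here. As a purely illustrative sketch of the general idea of adaptively integrating two teachers' complementary predictions into pseudo labels, one minimal scheme weights each teacher per-sample by its own confidence and keeps only high-confidence fused labels; the function name, weighting rule, and threshold below are assumptions, not the authors' method.

```python
import numpy as np


def fuse_teacher_predictions(p_src, p_vil, tau=0.8):
    """Hypothetical dual-teacher pseudo-label fusion (illustrative only).

    p_src : (N, C) softmax outputs queried from the black-box source model
    p_vil : (N, C) softmax outputs from a vision-language model (ViL)
    tau   : confidence threshold for accepting a fused pseudo label
    """
    # Per-sample confidence of each teacher = its maximum class probability.
    conf_src = p_src.max(axis=1, keepdims=True)
    conf_vil = p_vil.max(axis=1, keepdims=True)

    # Adaptive per-sample weight: the more confident teacher counts more.
    w_src = conf_src / (conf_src + conf_vil)
    fused = w_src * p_src + (1.0 - w_src) * p_vil

    labels = fused.argmax(axis=1)          # candidate pseudo labels
    mask = fused.max(axis=1) >= tau        # keep only reliable ones
    return labels, mask
```

In a distillation loop, the target model would then be trained only on samples where `mask` is true, which is one common way to limit noisy supervision from either teacher.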