[2603.02957] Leveraging Label Proportion Prior for Class-Imbalanced Semi-Supervised Learning
Computer Science > Machine Learning

arXiv:2603.02957 (cs) [Submitted on 3 Mar 2026]

Title: Leveraging Label Proportion Prior for Class-Imbalanced Semi-Supervised Learning

Authors: Kohki Akiba, Shinnosuke Matsuo, Shota Harada, Ryoma Bise

Abstract: Semi-supervised learning (SSL) often suffers under class imbalance, where pseudo-labeling amplifies majority-class bias and suppresses minority-class performance. We address this issue with a lightweight framework that, to our knowledge, is the first to introduce Proportion Loss from learning from label proportions (LLP) into SSL as a regularization term. Proportion Loss aligns model predictions with the global class distribution, mitigating bias across both majority and minority classes. To further stabilize training, we formulate a stochastic variant that accounts for fluctuations in mini-batch composition. Experiments on the long-tailed CIFAR-10 benchmark show that integrating Proportion Loss into FixMatch and ReMixMatch consistently improves performance over the baselines across imbalance severities and label ratios, and achieves competitive or superior results compared with existing class-imbalanced SSL (CISSL) methods, particularly under scarce-label conditions.

Subjects: Machine Learning (cs.LG); Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2603.02957 [cs.LG]
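The abstract does not give the loss formula, but Proportion Loss in the LLP literature is typically the cross-entropy between a known class-proportion prior and the batch-mean predicted distribution. The sketch below illustrates that standard formulation; the function name and the exact form used in this paper are assumptions, not taken from the source.

```python
import math

def proportion_loss(batch_probs, prior, eps=1e-8):
    """Sketch of an LLP-style Proportion Loss (illustrative, not the
    paper's exact formulation): cross-entropy between a class-proportion
    prior and the mean predicted distribution over a mini-batch.

    batch_probs: list of per-sample class-probability vectors (each sums to 1)
    prior: class-proportion prior over the same classes (sums to 1)
    """
    k = len(prior)
    n = len(batch_probs)
    # Average the per-sample predicted distributions over the mini-batch.
    mean_probs = [sum(p[c] for p in batch_probs) / n for c in range(k)]
    # Cross-entropy H(prior, mean_probs); eps guards log(0).
    return -sum(prior[c] * math.log(mean_probs[c] + eps) for c in range(k))
```

As a regularizer, this term pulls the model's aggregate predictions toward the global class distribution, which is how it can counteract the majority bias that pseudo-labeling introduces; the stochastic variant mentioned in the abstract would additionally model mini-batch-to-mini-batch variation in these proportions.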