[2510.14581] Model-agnostic Selective Labeling with Provable Statistical Guarantees
Summary
The paper presents 'Conformal Labeling', a model-agnostic method that ensures high-quality AI-generated labels by controlling the false discovery rate, offering theoretical guarantees for label trustworthiness.
Why It Matters
High-quality labels are crucial for training AI models, yet manual annotation is costly and AI-generated labels are error-prone. This research provides a statistically sound way to certify the reliability of AI-generated labels, making it significant for machine learning applications across domains.
Key Takeaways
- Introduces 'Conformal Labeling' to improve AI label quality.
- Controls false discovery rate (FDR) for reliable AI predictions.
- Offers theoretical guarantees on the accuracy of AI-generated labels.
- Demonstrates effectiveness across tasks like image and text labeling.
- Addresses a critical issue in AI model training and deployment.
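Stated formally (in standard selective-inference notation, which the summary itself does not spell out), the FDR being controlled is the expected fraction of wrong AI labels among the selected instances:

```latex
\mathrm{FDR} = \mathbb{E}\left[\frac{\bigl|\{\, i \in \mathcal{S} : \hat{Y}_i \neq Y_i \,\}\bigr|}{\max(|\mathcal{S}|,\, 1)}\right]
```

where $\mathcal{S}$ is the AI-labeled subset, $\hat{Y}_i$ the AI-assigned label, and $Y_i$ the true label.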
Computer Science > Machine Learning
arXiv:2510.14581 (cs)
[Submitted on 16 Oct 2025 (v1), last revised 14 Feb 2026 (this version, v3)]
Title: Model-agnostic Selective Labeling with Provable Statistical Guarantees
Authors: Huipeng Huang, Wenbo Liao, Huajun Xi, Hao Zeng, Mengchen Zhao, Hongxin Wei
Abstract: Obtaining high-quality labels for large datasets is expensive, requiring massive annotations from human experts. While AI models offer a cost-effective alternative by predicting labels, their label quality is compromised by unavoidable labeling errors. Existing methods mitigate this issue through selective labeling, where AI labels a subset and humans label the remainder. However, these methods lack theoretical guarantees on the quality of AI-assigned labels, often resulting in unacceptably high labeling error within the AI-labeled subset. To address this, we introduce Conformal Labeling, a novel method to identify instances where AI predictions can be provably trusted. This is achieved by controlling the false discovery rate (FDR), the proportion of incorrect labels within the selected subset. In particular, we construct a conformal p-value for each test instance by comparing AI models' predicted confidence to those of calibration instances mislabeled by AI models. Then, we select test instanc...
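The p-value construction described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names are invented, and the Benjamini-Hochberg procedure is assumed as the FDR-controlling selection rule, since the abstract is cut off before stating the exact selection step.

```python
import numpy as np

def conformal_pvalues(cal_conf_mislabeled, test_conf):
    """Conformal p-values against the mislabeled calibration set.

    cal_conf_mislabeled: confidences the model assigned to calibration
        instances it labeled incorrectly.
    test_conf: confidences on the test instances to be screened.
    A small p-value means the test confidence exceeds what the model
    typically shows when it is wrong.
    """
    cal = np.asarray(cal_conf_mislabeled, dtype=float)
    test = np.asarray(test_conf, dtype=float)
    n = len(cal)
    # Standard conformal form: p = (1 + #{cal >= t}) / (n + 1).
    counts = (cal[None, :] >= test[:, None]).sum(axis=1)
    return (1 + counts) / (n + 1)

def bh_select(pvals, alpha=0.1):
    """Benjamini-Hochberg: keep the largest k with p_(k) <= alpha*k/m."""
    pvals = np.asarray(pvals, dtype=float)
    m = len(pvals)
    order = np.argsort(pvals)
    below = np.nonzero(pvals[order] <= alpha * np.arange(1, m + 1) / m)[0]
    selected = np.zeros(m, dtype=bool)
    if len(below) > 0:
        selected[order[: below[-1] + 1]] = True  # accept AI labels here
    return selected
```

Instances where `bh_select` returns `True` would receive AI labels; the rest would be routed to human annotators.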