[2510.03734] Cost Efficient Fairness Audit Under Partial Feedback

arXiv - AI · 4 min read

Summary

This paper presents a cost-efficient approach to auditing the fairness of a classifier under partial feedback, proposing audit algorithms that outperform natural baselines on real-world datasets.

Why It Matters

As AI systems increasingly influence critical decisions, ensuring their fairness is vital. This research addresses the challenge of auditing classifiers when only partial feedback is available, offering a practical solution that can reduce costs and improve fairness assessments in various applications, such as loan approvals.

Key Takeaways

  • Introduces a novel cost model for fairness audits reflecting real-world expenses.
  • Proposes algorithms that significantly reduce audit costs compared to existing methods.
  • Demonstrates strong empirical performance on real-world datasets, outperforming baselines by 50%.

Computer Science > Machine Learning

arXiv:2510.03734 (cs) [Submitted on 4 Oct 2025 (v1), last revised 23 Feb 2026 (this version, v2)]

Title: Cost Efficient Fairness Audit Under Partial Feedback
Authors: Nirjhar Das, Mohit Sharma, Praharsh Nanavati, Kirankumar Shiragur, Amit Deshpande

Abstract: We study the problem of auditing the fairness of a given classifier under partial feedback, where true labels are available only for positively classified individuals (e.g., loan repayment outcomes are observed only for approved applicants). We introduce a novel cost model for acquiring additional labeled data, designed to more accurately reflect real-world costs such as credit assessment, loan processing, and potential defaults. Our goal is to find optimal fairness audit algorithms that are more cost-effective than random exploration and natural baselines. In our work, we consider two audit settings: a black-box model with no assumptions on the data distribution, and a mixture model, where features and true labels follow a mixture of exponential family distributions. In the black-box setting, we propose a near-optimal auditing algorithm under mild assumptions and show that a natural baseline can be strictly suboptimal. In the mixture model setting, we design a novel algorithm that achieves significantly lower audit cost than the black-box...
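To make the partial-feedback setting concrete, the sketch below simulates a lender whose repayment labels are observed only for approved applicants, then estimates a group-wise true-positive-rate gap by purchasing labels for a uniform sample of rejected applicants and reweighting by inverse inclusion probability. This is a minimal illustration of the audit setting, not the paper's algorithm; the data-generating process and all names here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: a classifier approves applicants; repayment (the true label)
# is observed only for approved ones. `group` is a protected attribute.
n = 20_000
group = rng.integers(0, 2, size=n)
y = rng.binomial(1, 0.55 + 0.1 * group)               # true repayment outcome
score = 0.5 * y + 0.2 * group + rng.normal(0, 0.3, n)
approved = score > 0.55                                # classifier decisions

# Ground-truth equal-opportunity gap (unobservable without full labels).
def tpr(g):
    pos = (group == g) & (y == 1)
    return approved[pos].mean()

true_gap = abs(tpr(0) - tpr(1))

# Partial-feedback audit: buy labels for a uniform sample of rejected
# applicants (each purchase costs money), then reweight by inverse
# inclusion probability to get unbiased group-wise TPR estimates.
budget = 2_000
rejected = np.flatnonzero(~approved)
bought = rng.choice(rejected, size=budget, replace=False)

w = np.zeros(n)
w[approved] = 1.0                   # approved labels are observed for free
w[bought] = len(rejected) / budget  # inverse-probability weight for purchases

def tpr_hat(g):
    pos = (group == g) & (y == 1)
    return (w * approved)[pos].sum() / w[pos].sum()

est_gap = abs(tpr_hat(0) - tpr_hat(1))
```

The reweighted estimate converges to the true gap as the label budget grows; the paper's contribution is choosing which labels to buy so that a target audit accuracy is reached at much lower cost than uniform sampling like this.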


