[2603.05226] Learning Optimal Individualized Decision Rules with Conditional Demographic Parity


arXiv - Machine Learning 3 min read

About this article


Statistics > Machine Learning — arXiv:2603.05226 (stat)
[Submitted on 5 Mar 2026]

Title: Learning Optimal Individualized Decision Rules with Conditional Demographic Parity
Authors: Wenhai Cui, Wen Su, Donglin Zeng, Xingqiu Zhao

Abstract: Individualized decision rules (IDRs) have become increasingly prevalent in societal applications such as personalized marketing, healthcare, and public policy design. However, a critical ethical concern arises from the potential discriminatory effects of IDRs trained on biased data: these algorithms may disproportionately harm individuals from minority subgroups defined by sensitive attributes such as gender, race, or language. To address this issue, we propose a novel framework that incorporates demographic parity (DP) and conditional demographic parity (CDP) constraints into the estimation of optimal IDRs. We show that the theoretically optimal IDRs under DP and CDP constraints can be obtained by applying perturbations to the unconstrained optimal IDRs, enabling a computationally efficient solution. Theoretically, we derive convergence rates for both the policy value and the fairness constraint term. The effectiveness of our methods is illustrated through comprehensive simulation studies and an empirical application to the Oregon Health Insurance Experiment.

Subjects: Machine Learning (stat.ML)
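The core idea of obtaining a fair rule by perturbing the unconstrained one can be illustrated with a toy sketch. This is not the paper's estimator, only a minimal analogue under assumed synthetic data: given estimated benefit scores and a binary sensitive attribute, a single unconstrained treatment threshold is replaced by group-specific thresholds so that treatment rates match across groups, which is exactly the demographic-parity condition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (illustrative assumption, not from the paper):
# a binary sensitive attribute A and estimated benefit scores tau_hat(X)
# whose distribution differs by group, so one shared threshold would
# treat the groups at different rates.
n = 1000
group = rng.integers(0, 2, size=n)           # sensitive attribute A in {0, 1}
score = rng.normal(loc=group * 0.5, size=n)  # estimated benefit scores

def dp_rule(score, group, budget=0.3):
    """Treat the same fraction (`budget`) within each group by using
    group-specific quantile thresholds -- a simple perturbation of the
    single unconstrained cutoff that enforces demographic parity."""
    decision = np.zeros(score.shape, dtype=int)
    for g in np.unique(group):
        mask = group == g
        thr = np.quantile(score[mask], 1 - budget)  # per-group cutoff
        decision[mask] = (score[mask] > thr).astype(int)
    return decision

d = dp_rule(score, group)
rate0 = d[group == 0].mean()
rate1 = d[group == 1].mean()
print(rate0, rate1)  # treatment rates are (approximately) equal across groups
```

The unconstrained rule would use one global threshold and, because group 1's scores are shifted upward, would treat group 1 far more often; the per-group perturbation equalizes the rates while still treating the highest-scoring individuals within each group.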

Originally published on March 06, 2026. Curated by AI News.

Related Articles

Machine Learning

[D] Data curation and targeted replacement as a pre-training alignment and controllability method

Hi, r/MachineLearning: has much research been done in large-scale training scenarios where undesirable data has been replaced before trai...

Reddit - Machine Learning · 1 min ·
AI Safety

I’ve come up with a new thought experiment to approach ASI, and it challenges the very notions of alignment and containment

I’ve written an essay exploring what I’m calling the Super-Intelligent Octopus Problem—a thought experiment designed to surface a paradox...

Reddit - Artificial Intelligence · 1 min ·
AI Safety

Bias in AI: Examples and 6 Ways to Fix it in 2026

AI bias is an anomaly in the output of ML algorithms due to prejudiced assumptions. Explore types of AI bias, examples, how to reduce bia...

AI Events · 36 min ·
LLMs

[R] I built a benchmark that catches LLMs breaking physics laws

I got tired of LLMs confidently giving wrong physics answers, so I built a benchmark that generates adversarial physics questions and gra...

Reddit - Machine Learning · 1 min ·

