[2407.01111] Proximity Matters: Local Proximity Enhanced Balancing for Treatment Effect Estimation

arXiv - Machine Learning 4 min read

About this article


Computer Science > Machine Learning

arXiv:2407.01111 (cs) [Submitted on 1 Jul 2024 (v1), last revised 25 Mar 2026 (this version, v2)]

Title: Proximity Matters: Local Proximity Enhanced Balancing for Treatment Effect Estimation

Authors: Hao Wang, Zhichao Chen, Zhaoran Liu, Xu Chen, Haoxuan Li, Zhouchen Lin

Abstract: Heterogeneous treatment effect (HTE) estimation from observational data poses significant challenges due to treatment selection bias. Existing methods address this bias by minimizing distribution discrepancies between treatment groups in latent space, focusing on global alignment. However, the fruitful aspect of local proximity, where similar units exhibit similar outcomes, is often overlooked. In this study, we propose Proximity-enhanced CounterFactual Regression (CFR-Pro) to exploit proximity for enhancing representation balancing within the HTE estimation context. Specifically, we introduce a pair-wise proximity regularizer based on optimal transport to incorporate the local proximity in discrepancy calculation. However, the curse of dimensionality renders the proximity measure and discrepancy estimation ineffective -- exacerbated by limited data availability for HTE estimation. To handle this problem, we further develop an informative subspace projector, which trades off minimal distan...
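The abstract mentions a pair-wise proximity regularizer based on optimal transport for comparing treated and control representations. As an illustration only (not the paper's released code), the sketch below computes a generic entropy-regularized (Sinkhorn) optimal-transport discrepancy between two sets of latent representations; the function names, uniform unit weights, and hyperparameters (`eps`, `n_iters`) are all assumptions, and CFR-Pro's actual regularizer may differ.

```python
import numpy as np

def pairwise_sq_dists(x, y):
    # Squared Euclidean cost between rows of x (n, d) and y (m, d).
    return ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)

def sinkhorn_ot(z_treated, z_control, eps=0.1, n_iters=200):
    """Entropy-regularized OT cost between treated and control representations.

    A generic Sinkhorn sketch for representation balancing, assuming
    uniform weights over units; not the paper's exact formulation.
    """
    C = pairwise_sq_dists(z_treated, z_control)
    n, m = C.shape
    a = np.full(n, 1.0 / n)      # uniform mass on treated units
    b = np.full(m, 1.0 / m)      # uniform mass on control units
    K = np.exp(-C / eps)         # Gibbs kernel
    u = np.ones(n)
    for _ in range(n_iters):     # alternating marginal projections
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]   # transport plan (n, m)
    return (P * C).sum(), P
```

In a balancing setup of this kind, the scalar OT cost would be added to the factual outcome loss as a discrepancy penalty, encouraging the encoder to map treated and control units into overlapping regions of the latent space.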

Originally published on March 26, 2026. Curated by AI News.

