[2602.18762] Bounds and Identification of Joint Probabilities of Potential Outcomes and Observed Variables under Monotonicity Assumptions
Summary
This paper develops bounds on, and conditions for identification of, joint probabilities of potential outcomes and observed variables under monotonicity assumptions, formulating the bounding problem as a linear program and validating the methods numerically.
Why It Matters
Joint probabilities of potential outcomes are central to causal inference but are generally not point-identified from observational or experimental data. By introducing new families of monotonicity assumptions and a linear programming formulation, this research can tighten partial-identification bounds and, in some cases, achieve point identification, improving causal analysis and decision-making across applications in statistics and machine learning.
Key Takeaways
- Introduces new families of monotonicity assumptions for joint probabilities.
- Formulates the bounding problem as a linear programming challenge.
- Presents a novel monotonicity assumption for better identification.
- Validates methods through numerical experiments with real-world datasets.
- Enhances understanding of causal inference in discrete treatment settings.
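To make the linear programming idea concrete, here is a minimal sketch (not the paper's actual formulation) for the simplest case of a binary treatment and binary outcome. The observed quantities `q0` and `q1` and all numeric values are assumed for illustration. Without further assumptions, the LP yields an interval of feasible values for the "benefit" probability P(Y(0)=0, Y(1)=1); adding the classical monotonicity constraint Y(1) ≥ Y(0), i.e. P(Y(0)=1, Y(1)=0) = 0, shrinks the interval to a point, illustrating how monotonicity assumptions can deliver identification.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical observed quantities (e.g., from a randomized experiment):
q1 = 0.7  # P(Y=1 | X=1) = p01 + p11
q0 = 0.3  # P(Y=1 | X=0) = p10 + p11

# Decision variables: p = (p00, p01, p10, p11), where pab = P(Y(0)=a, Y(1)=b).
A_eq = np.array([
    [1, 1, 1, 1],  # probabilities sum to 1
    [0, 1, 0, 1],  # p01 + p11 = q1
    [0, 0, 1, 1],  # p10 + p11 = q0
])
b_eq = np.array([1.0, q1, q0])

# Target linear functional: the "benefit" probability p01 = P(Y(0)=0, Y(1)=1).
c = np.array([0.0, 1.0, 0.0, 0.0])

def lp_bounds(extra_A_eq=None, extra_b_eq=None):
    """Minimize and maximize c @ p over the feasible distributions."""
    A, b = A_eq, b_eq
    if extra_A_eq is not None:
        A = np.vstack([A_eq, extra_A_eq])
        b = np.concatenate([b_eq, extra_b_eq])
    lo = linprog(c, A_eq=A, b_eq=b, bounds=[(0, 1)] * 4).fun
    hi = -linprog(-c, A_eq=A, b_eq=b, bounds=[(0, 1)] * 4).fun
    return lo, hi

# Bounds under the observed margins alone:
print(lp_bounds())  # -> (0.4, 0.7) for these assumed values

# Monotonicity Y(1) >= Y(0) forces p10 = 0, giving point identification:
print(lp_bounds(extra_A_eq=[[0, 0, 1, 0]], extra_b_eq=[0.0]))  # -> (0.4, 0.4)
```

The same LP structure extends to discrete ordinal outcomes with more categories: each monotonicity assumption contributes additional linear constraints, and each target joint probability is a linear functional of the type distribution.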
Statistics > Machine Learning
arXiv:2602.18762 (stat) [Submitted on 21 Feb 2026]
Title: Bounds and Identification of Joint Probabilities of Potential Outcomes and Observed Variables under Monotonicity Assumptions
Authors: Naoya Hashimoto, Yuta Kawakami, Jin Tian
Abstract: Evaluating joint probabilities of potential outcomes and observed variables, and their linear combinations, is a fundamental challenge in causal inference. This paper addresses the bounding and identification of these probabilities in settings with discrete treatment and discrete ordinal outcome. We propose new families of monotonicity assumptions and formulate the bounding problem as a linear programming problem. We further introduce a new monotonicity assumption specifically to achieve identification. Finally, we present numerical experiments to validate our methods and demonstrate their application using real-world datasets.
Subjects: Machine Learning (stat.ML); Machine Learning (cs.LG)
Cite as: arXiv:2602.18762 [stat.ML], https://doi.org/10.48550/arXiv.2602.18762
Submission history: [v1] Sat, 21 Feb 2026 09:00:18 UTC (92 KB), submitted by Naoya Hashimoto