[2510.04455] Inverse Mixed-Integer Programming: Learning Constraints then Objective Functions
Summary
This article presents a novel two-stage approach to inverse mixed-integer programming that learns both constraints and objective functions from data, addressing a significant gap in existing optimization methods.
Why It Matters
Understanding how to effectively learn constraints and objective functions is crucial for improving mathematical modeling in various fields such as power systems and scheduling. This research expands the capabilities of inverse optimization, potentially leading to more accurate and adaptable models in real-world applications.
Key Takeaways
- Introduces a two-stage method for learning constraints and objective functions in inverse optimization.
- Provides theoretical guarantees for the proposed approach using statistical learning tools.
- Demonstrates practical applicability on scheduling problems with a large number of decision variables.
Mathematics > Optimization and Control, arXiv:2510.04455 (math)
Submitted on 6 Oct 2025 (v1); last revised 16 Feb 2026 (this version, v2)
Title: Inverse Mixed-Integer Programming: Learning Constraints then Objective Functions
Authors: Akira Kitaoka
Abstract: Data-driven inverse optimization for mixed-integer linear programs (MILPs), which seeks to learn an objective function and constraints consistent with observed decisions, is important for building accurate mathematical models in a variety of domains, including power systems and scheduling. However, to the best of our knowledge, existing data-driven inverse optimization methods primarily focus on learning objective functions under known constraints, and learning both objective functions and constraints from data remains largely unexplored. In this paper, we propose a two-stage approach for a class of inverse optimization problems in which the objective is a linear combination of given feature functions and the constraints are parameterized by unknown functions and thresholds. Our method first learns the constraints and then, conditioned on the learned constraints, estimates the objective-function weights. On the theoretical side, we provide finite-sample guarantees for solving the proposed inverse optimization problem. To this end, we develop statistical learning to...
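The two-stage idea from the abstract can be illustrated on a toy problem. The sketch below is not the authors' algorithm; it is a minimal stand-in under assumed ingredients: a known constraint feature `g` with an unknown threshold, a tiny integer decision grid, and a candidate set of objective weights. Stage 1 picks the tightest threshold consistent with the observed decisions; Stage 2, conditioned on that threshold, picks weights that minimize the suboptimality of the observations.

```python
from itertools import product

# Hedged toy sketch of "learn constraints, then objective" for an
# inverse MILP. Everything below (the grid, g, w_true, the candidate
# weight set) is a made-up assumption for illustration only.

GRID = list(product(range(6), repeat=2))   # decisions x in {0,...,5}^2
b_true, w_true = 6, (1, 2)                 # hidden ground truth

def g(x):            # known constraint feature; its threshold is unknown
    return x[0] + x[1]

def obj(w, x):       # objective = linear combination of features (x0, x1)
    return w[0] * x[0] + w[1] * x[1]

observed = [(1, 5)]  # an optimal decision of the hidden true problem

# Stage 1: tightest threshold that keeps every observation feasible.
b_hat = max(g(x) for x in observed)

# Stage 2: conditioned on b_hat, pick weights from a small candidate
# grid that minimize total suboptimality of the observed decisions.
feasible = [x for x in GRID if g(x) <= b_hat]

def subopt_loss(w):
    best = max(obj(w, x) for x in feasible)
    return sum(best - obj(w, x) for x in observed)

candidates = [w for w in product(range(4), repeat=2) if w != (0, 0)]
w_hat = min(candidates, key=subopt_loss)

print("learned threshold:", b_hat)          # recovers b_true = 6 here
print("learned weights:", w_hat, "loss:", subopt_loss(w_hat))
```

Note that several weight vectors can explain a single observed decision equally well (the loss ties at zero), which hints at why the paper's finite-sample analysis, rather than exact recovery, is the natural guarantee to pursue.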