[2511.09219] Planning in Branch-and-Bound: Model-Based Reinforcement Learning for Exact Combinatorial Optimization
Computer Science > Machine Learning
arXiv:2511.09219 (cs)
[Submitted on 12 Nov 2025 (v1), last revised 2 Apr 2026 (this version, v4)]

Title: Planning in Branch-and-Bound: Model-Based Reinforcement Learning for Exact Combinatorial Optimization
Authors: Paul Strang, Zacharie Alès, Côme Bissuel, Olivier Juan, Safia Kedad-Sidhoum, Emmanuel Rachelson

Abstract: Mixed-Integer Linear Programming (MILP) lies at the core of many real-world combinatorial optimization (CO) problems and is traditionally solved by branch-and-bound (B&B). A key driver of B&B solver efficiency is the variable selection heuristic that guides branching decisions. Looking to move beyond static, hand-crafted heuristics, recent work has explored adapting traditional reinforcement learning (RL) algorithms to the B&B setting, aiming to learn branching strategies tailored to specific MILP distributions. In parallel, RL agents have achieved remarkable success in board games, a very specific class of combinatorial problems, by leveraging environment simulators to plan via Monte Carlo Tree Search (MCTS). Building on these developments, we introduce Plan-and-Branch-and-Bound (PlanB&B), a model-based reinforcement learning (MBRL) agent that leverages a learned internal model of the B&B dynamics to discover improved branchin...
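To make the idea concrete, here is a minimal, hypothetical sketch of MCTS-style variable selection in B&B: a bandit-style lookahead scores each candidate branching variable by repeatedly querying a stand-in for a learned model of the B&B dynamics. All names (`mcts_select_variable`, `toy_model`) are illustrative assumptions, not the paper's actual implementation, which plans over full tree trajectories.

```python
import math
import random

def mcts_select_variable(candidates, model_value, n_sims=100, c=1.4, seed=0):
    """Pick a branching variable via UCB1-guided simulated lookahead.

    candidates:  variable indices eligible for branching at this node.
    model_value: callable (var, rng) -> stochastic estimate of subtree
                 quality, standing in for rollouts through a learned
                 model of the B&B dynamics (an assumption, not the
                 paper's interface).
    """
    rng = random.Random(seed)
    counts = {v: 0 for v in candidates}   # simulations per variable
    totals = {v: 0.0 for v in candidates} # accumulated simulated value
    for t in range(1, n_sims + 1):
        def ucb(v):
            # Untried variables get infinite priority; otherwise balance
            # exploitation (mean value) and exploration (visit bonus).
            if counts[v] == 0:
                return float("inf")
            return totals[v] / counts[v] + c * math.sqrt(math.log(t) / counts[v])
        v = max(candidates, key=ucb)
        totals[v] += model_value(v, rng)
        counts[v] += 1
    # Most-visited variable is the most promising under the model.
    return max(candidates, key=lambda v: counts[v])

# Toy "learned model": variable 2 yields the best simulated bound
# improvement on average (values are fabricated for illustration).
def toy_model(v, rng):
    return {0: 0.2, 1: 0.5, 2: 0.9}[v] + rng.gauss(0, 0.05)

best = mcts_select_variable([0, 1, 2], toy_model)  # selects variable 2
```

The design choice mirrors MCTS in board-game agents: simulation effort concentrates on the action (here, the branching variable) with the best estimated long-term payoff, and the visit count, rather than the raw value estimate, determines the final decision.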