[2602.17375] MDP Planning as Policy Inference
Summary
This article summarizes a novel approach to episodic Markov decision process (MDP) planning that frames it as Bayesian inference over policies, so that the posterior's modes coincide with return-maximizing solutions and its dispersion captures uncertainty over optimal behavior.
Why It Matters
The research addresses a critical aspect of decision-making in machine learning, particularly in reinforcement learning. By treating policies as latent variables, it provides insights into how uncertainty can be managed in MDPs, which is essential for developing more robust AI systems in complex environments.
Key Takeaways
- MDP planning can be effectively framed as Bayesian inference over policies.
- The proposed method captures policy uncertainty, enhancing decision-making in stochastic environments.
- Variational sequential Monte Carlo is adapted for policy inference in discrete domains.
- The approach is evaluated on grid worlds, Blackjack, Triangle Tireworld, and Academic Advising, with comparisons to discrete Soft Actor-Critic.
- Acting by posterior predictive sampling yields stochastic behavior through a Thompson-sampling interpretation rather than entropy regularization.
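The core idea in the takeaways above can be sketched in a few lines: treat each deterministic policy as a latent variable, weight it by an unnormalized optimality likelihood that grows monotonically with its expected return (here `exp(beta * return)`), and act by drawing a single policy from the resampled posterior (Thompson-style). This is a minimal illustrative sketch only, not the paper's VSMC algorithm; the chain MDP, `beta`, and particle count are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy deterministic chain MDP (hypothetical, not from the paper):
# states 0..4, actions 0 = left, 1 = right; reward 1 for being in state 4.
N_STATES, N_ACTIONS, HORIZON = 5, 2, 8

def step(s, a):
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    return s2, float(s2 == N_STATES - 1)

def episode_return(policy, s0=0):
    s, total = s0, 0.0
    for _ in range(HORIZON):
        s, r = step(s, policy[s])
        total += r
    return total

def policy_posterior(n_particles=500, beta=2.0):
    """Importance-sample deterministic policies, weighted exp(beta * return)."""
    particles = rng.integers(0, N_ACTIONS, size=(n_particles, N_STATES))
    logw = np.array([beta * episode_return(p) for p in particles])
    w = np.exp(logw - logw.max())  # stabilize before normalizing
    w /= w.sum()
    idx = rng.choice(n_particles, size=n_particles, p=w)  # resampling step
    return particles[idx]

# Posterior predictive (Thompson-style) acting: sample one policy, follow it.
posterior = policy_posterior()
sampled = posterior[rng.integers(len(posterior))]
print(episode_return(sampled))
```

Because the weight is monotone in return, the resampled particle set concentrates on return-maximizing policies while retaining dispersion over near-optimal ones, which is the policy-level uncertainty the paper exploits.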
Computer Science > Machine Learning — arXiv:2602.17375 (cs)
[Submitted on 19 Feb 2026]
Title: MDP Planning as Policy Inference
Authors: David Tolpin
Abstract: We cast episodic Markov decision process (MDP) planning as Bayesian inference over _policies_. A policy is treated as the latent variable and is assigned an unnormalized probability of optimality that is monotone in its expected return, yielding a posterior distribution whose modes coincide with return-maximizing solutions while posterior dispersion represents uncertainty over optimal behavior. To approximate this posterior in discrete domains, we adapt variational sequential Monte Carlo (VSMC) to inference over deterministic policies under stochastic dynamics, introducing a sweep that enforces policy consistency across revisited states and couples transition randomness across particles to avoid confounding from simulator noise. Acting is performed by posterior predictive sampling, which induces a stochastic control policy through a Thompson-sampling interpretation rather than entropy regularization. Across grid worlds, Blackjack, Triangle Tireworld, and Academic Advising, we analyze the structure of inferred policy distributions and compare the resulting behavior to discrete Soft Actor-Critic, highlighting qualitative and statistical differences that arise from policy-level uncertainty.
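The abstract's point about coupling transition randomness across particles can be illustrated with common random numbers: evaluate competing policies on the *same* noise draws, so differences in return reflect the policies rather than simulator luck. This is a hedged sketch of that general variance-reduction idea, not the paper's VSMC sweep; the stochastic chain MDP and 0.8 success probability are invented for illustration.

```python
import numpy as np

# Stochastic chain MDP (hypothetical): states 0..4, actions 0 = left,
# 1 = right; each action succeeds with prob 0.8, else is a no-op.
N_STATES, HORIZON = 5, 6

def noisy_step(s, a, u):
    # u is a uniform draw shared across all policies being compared
    if u < 0.8:
        s = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    return s, float(s == N_STATES - 1)

def coupled_returns(policies, seed=0):
    """Evaluate all policies under one shared noise sequence (common random numbers)."""
    noise = np.random.default_rng(seed).uniform(size=HORIZON)
    returns = []
    for pi in policies:
        s, total = 0, 0.0
        for u in noise:
            s, r = noisy_step(s, pi[s], u)
            total += r
        returns.append(total)
    return returns

always_right = [1] * N_STATES
always_left = [0] * N_STATES
print(coupled_returns([always_right, always_left]))
```

With shared draws the comparison is deterministic for a fixed seed, so a policy that looks better is better under that realization of the dynamics, which is what prevents simulator noise from confounding the particle weights.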