[2410.03919] Online Posterior Sampling with a Diffusion Prior
Summary
The paper presents approximate posterior sampling algorithms for contextual bandits that use a diffusion model as the prior, enabling sampling from complex parameter distributions that a traditional Gaussian prior cannot represent, while retaining comparable simplicity and efficiency.
Why It Matters
This research is significant as it addresses limitations of Gaussian priors in complex distributions, offering a more robust framework for contextual bandits. The proposed methods can improve decision-making processes in various applications, including reinforcement learning and adaptive systems.
Key Takeaways
- Introduces approximate posterior sampling algorithms using diffusion model priors.
- Improves on Gaussian priors in contextual bandit problems by capturing complex parameter distributions.
- Demonstrates empirical success across various contextual bandit scenarios.
- Maintains simplicity and computational efficiency in sampling methods.
- Provides asymptotically consistent posterior approximations.
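As background for these takeaways, the Gaussian-prior baseline that the paper generalizes is Thompson sampling with a Laplace-approximated posterior. A minimal sketch for a logistic bandit follows; all function and variable names are illustrative, not from the paper:

```python
import numpy as np

def laplace_thompson_step(contexts, past_x, past_y, prior_mean, prior_prec, rng):
    """One round of Thompson sampling for a logistic bandit with a Gaussian
    prior N(prior_mean, prior_prec^{-1}) on the weight vector.

    The intractable logistic posterior is replaced by a Gaussian centered at
    the MAP estimate, with covariance given by the inverse Hessian there
    (the Laplace approximation)."""
    theta = prior_mean.astype(float).copy()
    for _ in range(25):  # Newton's method for the MAP estimate
        p = 1.0 / (1.0 + np.exp(-past_x @ theta))
        grad = past_x.T @ (p - past_y) + prior_prec @ (theta - prior_mean)
        hess = past_x.T @ (past_x * (p * (1.0 - p))[:, None]) + prior_prec
        theta -= np.linalg.solve(hess, grad)
    p = 1.0 / (1.0 + np.exp(-past_x @ theta))
    hess = past_x.T @ (past_x * (p * (1.0 - p))[:, None]) + prior_prec
    # Posterior sample: theta ~ N(theta_MAP, H^{-1}); then act greedily
    # with respect to the sampled parameter vector.
    sample = rng.multivariate_normal(theta, np.linalg.inv(hess))
    return int(np.argmax(contexts @ sample))
```

The paper's contribution is to replace the single Gaussian prior in this recipe with a learned diffusion prior while keeping the per-round computation lightweight.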
Paper Details
Computer Science > Machine Learning (cs.LG) — arXiv:2410.03919
Submitted on 4 Oct 2024 (v1); last revised 16 Feb 2026 (v2).
Title: Online Posterior Sampling with a Diffusion Prior
Authors: Branislav Kveton, Boris Oreshkin, Youngsuk Park, Aniket Deshmukh, Rui Song
Abstract: Posterior sampling in contextual bandits with a Gaussian prior can be implemented exactly or approximately using the Laplace approximation. The Gaussian prior is computationally efficient but it cannot describe complex distributions. In this work, we propose approximate posterior sampling algorithms for contextual bandits with a diffusion model prior. The key idea is to sample from a chain of approximate conditional posteriors, one for each stage of the reverse diffusion process, which are obtained by the Laplace approximation. Our approximations are motivated by posterior sampling with a Gaussian prior, and inherit its simplicity and efficiency. They are asymptotically consistent and perform well empirically on a variety of contextual bandit problems.
Subjects: Machine Learning (cs.LG); Machine Learning (stat.ML)
DOI: https://doi.org/10.48550/arXiv.2410.03919
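The abstract's key idea, sampling from a chain of conditional posteriors along the reverse diffusion process, can be illustrated with a deliberately simplified sketch. This is not the paper's algorithm: it assumes a linear-Gaussian observation model (so each stage's conditional posterior is an exact Gaussian, where the paper would use a Laplace approximation), and `denoiser` is a hypothetical stand-in for a learned diffusion prior's reverse-step predictor:

```python
import numpy as np

def diffusion_posterior_sample(denoiser, x, y, noise_var, num_steps, d, rng):
    """Sketch: draw a parameter sample theta given observations
    y ~ N(x @ theta, noise_var * I) by walking a reverse diffusion chain
    and conditioning each stage's Gaussian on the data.

    `denoiser(theta_t, t)` returns an assumed (mean, var) pair describing
    the reverse-step Gaussian p(theta_{t-1} | theta_t)."""
    theta = rng.normal(size=d)  # start from the diffusion's base noise
    for t in range(num_steps, 0, -1):
        prior_mean, prior_var = denoiser(theta, t)
        # Combine the reverse-step Gaussian prior with the Gaussian
        # likelihood of the data; with a non-Gaussian likelihood this
        # stage-wise conditional would be Laplace-approximated instead.
        prec = np.eye(d) / prior_var + x.T @ x / noise_var
        cov = np.linalg.inv(prec)
        mean = cov @ (prior_mean / prior_var + x.T @ y / noise_var)
        theta = rng.multivariate_normal(mean, cov)
    return theta
```

The point of the chain is that each stage only requires a Gaussian-prior posterior update, so the per-stage cost matches the classical Gaussian-prior case while the overall prior can be arbitrarily complex.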