[2410.03919] Online Posterior Sampling with a Diffusion Prior

arXiv - Machine Learning · 3 min read

Summary

The paper presents algorithms for online posterior sampling in contextual bandits using a diffusion model prior, enhancing the efficiency and accuracy of sampling methods compared to traditional Gaussian priors.

Why It Matters

This research is significant because it addresses a key limitation of Gaussian priors, which cannot capture complex distributions, and offers a more expressive framework for contextual bandits. The proposed methods can improve decision-making in applications such as reinforcement learning and adaptive systems.

Key Takeaways

  • Introduces approximate posterior sampling algorithms using diffusion model priors.
  • Improves performance on contextual bandit problems relative to Gaussian priors.
  • Demonstrates empirical success across various contextual bandit scenarios.
  • Maintains simplicity and computational efficiency in sampling methods.
  • Offers asymptotic consistency in approximations, ensuring reliability.
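To make the baseline concrete, the following is a minimal sketch of posterior (Thompson) sampling in a linear bandit with a standard Gaussian prior on the parameters — the setting the paper's diffusion prior generalizes. This is an illustration, not the paper's implementation; the arm features, noise level, and prior are placeholder choices, and the posterior update is exact here because the model is linear-Gaussian.

```python
import numpy as np

def thompson_linear_bandit(arm_features, true_theta, n_rounds, noise_sd, rng):
    """Thompson sampling for a linear bandit with a N(0, I) prior on theta.
    At each round: sample theta from the current posterior, pull the arm
    that is best under the sample, observe a noisy reward, and update the
    (exact, Gaussian) posterior."""
    d = arm_features.shape[1]
    prec = np.eye(d)   # posterior precision; starts at the prior precision
    b = np.zeros(d)    # running sum of x * reward / noise_var
    for _ in range(n_rounds):
        cov = np.linalg.inv(prec)
        theta_sample = rng.multivariate_normal(cov @ b, cov)
        arm = int(np.argmax(arm_features @ theta_sample))
        x = arm_features[arm]
        reward = x @ true_theta + noise_sd * rng.standard_normal()
        prec += np.outer(x, x) / noise_sd**2
        b += x * reward / noise_sd**2
    cov = np.linalg.inv(prec)
    return cov @ b     # posterior mean estimate of theta
```

With a diffusion model prior, only the `theta_sample` draw changes: instead of one Gaussian posterior, the sample is produced by a chain of approximate conditional posteriors through the reverse diffusion process.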

Computer Science > Machine Learning

arXiv:2410.03919 (cs) [Submitted on 4 Oct 2024 (v1), last revised 16 Feb 2026 (this version, v2)]

Title: Online Posterior Sampling with a Diffusion Prior
Authors: Branislav Kveton, Boris Oreshkin, Youngsuk Park, Aniket Deshmukh, Rui Song

Abstract: Posterior sampling in contextual bandits with a Gaussian prior can be implemented exactly or approximately using the Laplace approximation. The Gaussian prior is computationally efficient but it cannot describe complex distributions. In this work, we propose approximate posterior sampling algorithms for contextual bandits with a diffusion model prior. The key idea is to sample from a chain of approximate conditional posteriors, one for each stage of the reverse diffusion process, which are obtained by the Laplace approximation. Our approximations are motivated by posterior sampling with a Gaussian prior, and inherit its simplicity and efficiency. They are asymptotically consistent and perform well empirically on a variety of contextual bandit problems.

Subjects: Machine Learning (cs.LG); Machine Learning (stat.ML)
Cite as: arXiv:2410.03919 [cs.LG] (or arXiv:2410.03919v2 [cs.LG] for this version)
DOI: https://doi.org/10.48550/arXiv.2410.03919
Submission history: From Branislav Kveton
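The abstract's key idea — a chain of approximate conditional posteriors, one per reverse-diffusion stage, each obtained by a Laplace approximation — can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the paper's algorithm: `denoiser` is a hypothetical stand-in for the learned diffusion model's mean prediction, and the stage-wise precision schedule is illustrative. In this linear-Gaussian reward model the Laplace approximation of each conditional posterior is exact.

```python
import numpy as np

def laplace_gaussian_posterior(prior_mean, prior_prec, X, y, noise_var):
    """Posterior for a linear-Gaussian reward model with a Gaussian
    (conditional) prior -- the quantity the Laplace approximation
    recovers at each reverse-diffusion stage."""
    post_prec = prior_prec + X.T @ X / noise_var
    post_cov = np.linalg.inv(post_prec)
    post_mean = post_cov @ (prior_prec @ prior_mean + X.T @ y / noise_var)
    return post_mean, post_cov

def sample_theta_diffusion_prior(X, y, noise_var, denoiser, n_stages, dim, rng):
    """Draw model parameters by chaining approximate conditional
    posteriors backwards through the reverse diffusion process.
    `denoiser(theta_t, t)` is a hypothetical interface standing in
    for the learned diffusion model."""
    theta = rng.standard_normal(dim)  # theta_T ~ N(0, I)
    for t in range(n_stages, 0, -1):
        # Conditional prior for this stage, centered at the denoiser output.
        prior_mean = denoiser(theta, t)
        prior_prec = (t / n_stages + 1.0) * np.eye(dim)  # illustrative schedule
        mean, cov = laplace_gaussian_posterior(prior_mean, prior_prec,
                                               X, y, noise_var)
        theta = rng.multivariate_normal(mean, cov)
    return theta
```

Because each stage conditions on the same observed data, the likelihood term anchors every step of the chain, while the diffusion prior shapes where probability mass is placed — this is how the construction inherits the simplicity of Gaussian-prior posterior sampling.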
