[2602.14472] Frequentist Regret Analysis of Gaussian Process Thompson Sampling via Fractional Posteriors


Summary

This paper presents a frequentist regret analysis of Gaussian Process Thompson Sampling (GP-TS) using fractional posteriors, offering a unified framework that avoids discretization and provides kernel-agnostic regret bounds.

Why It Matters

The findings enhance the understanding of Gaussian Process Thompson Sampling, a key method in machine learning for decision-making. By providing a framework that is independent of discretization, this research broadens the applicability of GP-TS across various kernel classes, making it significant for both theoretical and practical advancements in the field.

Key Takeaways

  • Introduces a novel frequentist regret analysis for GP-TS using fractional posteriors.
  • Establishes kernel-agnostic regret bounds applicable across different kernel classes.
  • Demonstrates that variance inflation in GP-TS can be interpreted through fractional posteriors.
  • Identifies conditions under which the posterior contraction rate can be controlled.
  • Recovers known regret bounds for specific kernels as special cases of the general framework.
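The fractional-posterior reading of variance inflation can be sketched in standard notation (this is the usual tempered-posterior construction, written here for context rather than quoted from the paper): with $L_t$ the likelihood of the first $t$ observations and $\pi$ the GP prior,

```latex
\pi_\alpha(f \mid \mathcal{D}_t) \;\propto\; L_t(f)^{\alpha}\,\pi(f), \qquad \alpha \in (0,1).
```

For a Gaussian observation model with noise variance $\sigma^2$, raising the likelihood to the power $\alpha$ is proportional to a Gaussian likelihood with noise variance $\sigma^2/\alpha > \sigma^2$, so sampling from the fractional posterior coincides with sampling from an ordinary GP posterior whose predictive variance is inflated.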

Mathematics > Statistics Theory · arXiv:2602.14472 (math) · Submitted on 16 Feb 2026

Authors: Somjit Roy, Prateek Jaiswal, Anirban Bhattacharya, Debdeep Pati, Bani K. Mallick

Abstract: We study Gaussian Process Thompson Sampling (GP-TS) for sequential decision-making over compact, continuous action spaces and provide a frequentist regret analysis based on fractional Gaussian process posteriors, without relying on domain discretization as in prior work. We show that the variance inflation commonly assumed in existing analyses of GP-TS can be interpreted as Thompson Sampling with respect to a fractional posterior with tempering parameter $\alpha \in (0,1)$. We derive a kernel-agnostic regret bound expressed in terms of the information gain parameter $\gamma_t$ and the posterior contraction rate $\epsilon_t$, and identify conditions on the Gaussian process prior under which $\epsilon_t$ can be controlled. As special cases of our general bound, we recover regret of order $\tilde{\mathcal{O}}(T^{\frac{1}{2}})$ for the squared exponential kernel, $\tilde{\mathcal{O}}(T^{\frac{2\nu+3d}{2(2\nu+d)}})$ for the Matérn-$\nu$ kernel, and a bound of order $\tilde{\mathcal{O}}(T^{\frac{2\nu+3d}{2(2\nu+d)}})$ for the rational quadr…
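As an illustrative sketch only (the function names, the squared-exponential kernel choice, and the finite candidate grid are my additions; the paper's analysis works directly over a continuous action space without discretization), one GP-TS step against a fractional posterior might look like:

```python
import numpy as np

def rbf(X1, X2, ls=0.2):
    # Squared-exponential kernel, one of the kernel classes covered by the bounds.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def fractional_gp_posterior(X, y, Xs, alpha=0.5, noise=0.1):
    # Tempering the Gaussian likelihood by alpha is equivalent to running
    # ordinary GP regression with the noise variance inflated to noise^2 / alpha.
    s2 = noise**2 / alpha
    K = rbf(X, X) + s2 * np.eye(len(X))
    Ks = rbf(X, Xs)
    L = np.linalg.cholesky(K)
    A = np.linalg.solve(L, Ks)
    mu = A.T @ np.linalg.solve(L, y)          # posterior mean on the grid
    cov = rbf(Xs, Xs) - A.T @ A               # (inflated) posterior covariance
    return mu, cov

def gp_ts_step(X, y, Xs, rng, alpha=0.5):
    # Thompson Sampling: draw one path from the fractional posterior on the
    # candidate grid Xs and play its argmax.
    mu, cov = fractional_gp_posterior(X, y, Xs, alpha)
    f = rng.multivariate_normal(mu, cov + 1e-8 * np.eye(len(Xs)))
    return Xs[np.argmax(f)], f
```

Because `alpha < 1` inflates the noise term, the sampled paths spread more widely than under the exact posterior, which is precisely the extra exploration that existing GP-TS analyses inject by hand.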
