[2602.14342] High-accuracy log-concave sampling with stochastic queries


Summary

This paper shows that high-accuracy log-concave sampling, with iteration and query complexities scaling as polylog(1/δ), is achievable using stochastic gradient queries with subexponential tails. This stands in contrast to convex optimization, where stochasticity in the gradient oracle forces poly(1/δ) queries.

Why It Matters

These results matter for statistics and machine learning because they show that high accuracy in log-concave sampling survives gradient stochasticity, provided the noise is light-tailed. This informs the design of sampling algorithms built on stochastic gradients, and the accompanying lower bounds delineate exactly when such efficiency is impossible.

Key Takeaways

  • High-accuracy guarantees for log-concave sampling are achievable with stochastic gradients that have subexponential tails.
  • Iteration and query complexities scale as polylog(1/δ), where δ is the target accuracy.
  • Light-tailed stochastic gradients are necessary for high accuracy: in the bounded-variance case, the minimax-optimal query complexity scales as Θ(1/δ).
  • The framework extends to stochastic zeroth-order (value) queries.
  • The results establish a separation between log-concave sampling and convex optimization, where even additive Gaussian gradient noise incurs poly(1/δ) queries.
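To make the setting concrete, here is a minimal sketch of sampling from a log-concave target with a noisy gradient oracle via stochastic gradient Langevin dynamics. This is a baseline illustration of the query model only, not the paper's high-accuracy algorithm; the target (a standard Gaussian), the step size, and the Gaussian oracle noise are all illustrative assumptions.

```python
import numpy as np

def sgld_sample(grad_est, x0, step, n_iters, rng):
    """Stochastic gradient Langevin dynamics:
    x <- x - step * g + sqrt(2 * step) * xi,  xi ~ N(0, I),
    where g = grad_est(x, rng) is a stochastic gradient of the potential V
    (the target density is pi(x) proportional to exp(-V(x)))."""
    x = np.array(x0, dtype=float)
    for _ in range(n_iters):
        g = grad_est(x, rng)
        x = x - step * g + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
    return x

# Illustrative target: standard Gaussian, V(x) = ||x||^2 / 2, so grad V(x) = x.
# The oracle returns the exact gradient plus light-tailed (Gaussian) noise.
rng = np.random.default_rng(0)
noisy_grad = lambda x, rng: x + 0.1 * rng.standard_normal(x.shape)

samples = np.array([
    sgld_sample(noisy_grad, np.zeros(2), step=0.05, n_iters=500, rng=rng)
    for _ in range(200)
])
print(samples.mean(axis=0), samples.std(axis=0))
```

With a strongly log-concave target, the chain contracts quickly; the empirical mean and per-coordinate spread of the samples should be close to those of the standard Gaussian target.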

arXiv:2602.14342 [math.ST] (Submitted on 15 Feb 2026)

Title: High-accuracy log-concave sampling with stochastic queries
Authors: Fan Chen, Sinho Chewi, Constantinos Daskalakis, Alexander Rakhlin

Abstract: We show that high-accuracy guarantees for log-concave sampling -- that is, iteration and query complexities which scale as $\mathrm{poly}\log(1/\delta)$, where $\delta$ is the desired target accuracy -- are achievable using stochastic gradients with subexponential tails. Notably, this exhibits a separation with the problem of convex optimization, where stochasticity (even additive Gaussian noise) in the gradient oracle incurs $\mathrm{poly}(1/\delta)$ queries. We also give an information-theoretic argument that light-tailed stochastic gradients are necessary for high accuracy: for example, in the bounded variance case, we show that the minimax-optimal query complexity scales as $\Theta(1/\delta)$. Our framework also provides similar high accuracy guarantees under stochastic zeroth order (value) queries.

Subjects: Statistics Theory (math.ST); Data Structures and Algorithms (cs.DS); Machine Learning (cs.LG); Probability (math.PR)
DOI: https://doi.org/10.48550/arXiv.2602.14342
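The abstract's zeroth-order setting replaces gradient queries with value (function evaluation) queries. A standard way to bridge the two is a two-point Gaussian-smoothing gradient estimator; the sketch below is an illustrative reduction under that classical construction, not necessarily the estimator used in the paper, and the quadratic potential is an assumed example.

```python
import numpy as np

def two_point_grad(value_oracle, x, h, rng):
    """Two-point zeroth-order gradient estimate with Gaussian smoothing:
    ((V(x + h*u) - V(x - h*u)) / (2*h)) * u,  u ~ N(0, I).
    In expectation (as h -> 0) this recovers grad V(x), since E[u u^T] = I."""
    u = rng.standard_normal(x.shape)
    return (value_oracle(x + h * u) - value_oracle(x - h * u)) / (2.0 * h) * u

# Illustrative potential: V(x) = ||x||^2 / 2, so grad V(x) = x.
rng = np.random.default_rng(1)
V = lambda x: 0.5 * float(x @ x)
x = np.array([1.0, -2.0])
est = np.mean([two_point_grad(V, x, h=1e-3, rng=rng) for _ in range(5000)],
              axis=0)
print(est)  # averaged estimate, close to the true gradient [1.0, -2.0]
```

Each single estimate is noisy even when the value oracle is exact, which is why zeroth-order methods are naturally analyzed through the same stochastic-gradient lens as the first-order setting.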

