[2602.14342] High-accuracy log-concave sampling with stochastic queries
Summary
This paper shows that high-accuracy guarantees for log-concave sampling, meaning iteration and query complexities that scale as polylog(1/δ) in the target accuracy δ, are achievable using stochastic gradient queries with light (subexponential) tails. This stands in contrast to stochastic convex optimization, where such scaling is impossible.
Why It Matters
These findings are significant for statistics and machine learning: they show that sampling algorithms relying on stochastic gradients can reach high accuracy at a cost growing only polylogarithmically in 1/δ, whereas stochastic convex optimization, even under additive Gaussian gradient noise, requires poly(1/δ) queries. This has implications for applications in optimization and data analysis where gradients are noisy but accurate sampling is crucial.
Key Takeaways
- High-accuracy guarantees for log-concave sampling are achievable with stochastic gradients that have subexponential tails (see the sketch after this list).
- The iteration and query complexities can scale as polylog(1/δ), where δ is the desired target accuracy.
- Light-tailed stochastic gradients are necessary for high accuracy: with only bounded variance, the minimax-optimal query complexity scales as Θ(1/δ).
- The framework extends to stochastic zeroth-order (value) queries.
- This establishes a separation between log-concave sampling and convex optimization, where gradient stochasticity incurs poly(1/δ) queries.
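To make the stochastic-gradient query model concrete, here is a minimal Python sketch of stochastic gradient Langevin dynamics (SGLD) on a standard Gaussian target. This illustrates the query model only, not the paper's algorithm: the oracle `stochastic_grad`, its Gaussian noise level, and the step size are all illustrative assumptions, and plain Langevin-type discretizations like this one are generally only known to reach accuracy δ with poly(1/δ) queries, precisely the dependence the paper improves to polylog(1/δ).

```python
import numpy as np

def stochastic_grad(x, sigma=0.1, rng=None):
    """Hypothetical stochastic gradient oracle for the standard Gaussian
    target pi(x) ∝ exp(-||x||^2 / 2), whose exact gradient is grad f(x) = x.
    The oracle returns the gradient corrupted by light-tailed (Gaussian)
    noise of scale sigma; both the oracle and sigma are illustrative."""
    rng = rng or np.random.default_rng()
    return x + sigma * rng.standard_normal(x.shape)

def sgld(x0, n_steps=10_000, step=1e-2, seed=0):
    """Stochastic gradient Langevin dynamics:
        x_{k+1} = x_k - step * g(x_k) + sqrt(2 * step) * xi_k,
    where g is a stochastic gradient query and xi_k ~ N(0, I)."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        g = stochastic_grad(x, rng=rng)  # one stochastic gradient query
        x = x - step * g + np.sqrt(2 * step) * rng.standard_normal(x.shape)
        samples.append(x.copy())
    return np.array(samples)

# Discarding the first half as burn-in, the empirical variance of the
# samples should be close to that of the target N(0, I).
samples = sgld(x0=np.zeros(2))
print(samples[len(samples) // 2 :].var(axis=0))  # roughly [1., 1.]
```

With these settings the printed variances should be close to those of N(0, I); driving the accuracy δ further down with this naive scheme requires shrinking the step size, which is exactly where the poly(1/δ) cost of vanilla discretizations shows up.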
Mathematics > Statistics Theory
arXiv:2602.14342 [math.ST] (Submitted on 15 Feb 2026)
Title: High-accuracy log-concave sampling with stochastic queries
Authors: Fan Chen, Sinho Chewi, Constantinos Daskalakis, Alexander Rakhlin
Abstract: We show that high-accuracy guarantees for log-concave sampling -- that is, iteration and query complexities which scale as $\mathrm{poly}\log(1/\delta)$, where $\delta$ is the desired target accuracy -- are achievable using stochastic gradients with subexponential tails. Notably, this exhibits a separation with the problem of convex optimization, where stochasticity (even additive Gaussian noise) in the gradient oracle incurs $\mathrm{poly}(1/\delta)$ queries. We also give an information-theoretic argument that light-tailed stochastic gradients are necessary for high accuracy: for example, in the bounded variance case, we show that the minimax-optimal query complexity scales as $\Theta(1/\delta)$. Our framework also provides similar high accuracy guarantees under stochastic zeroth order (value) queries.
Subjects: Statistics Theory (math.ST); Data Structures and Algorithms (cs.DS); Machine Learning (cs.LG); Probability (math.PR)
DOI: https://doi.org/10.48550/arXiv.2602.14342