[2510.02983] Oracle-based Uniform Sampling from Convex Bodies
Summary
This paper introduces new Markov chain Monte Carlo algorithms for uniform sampling from convex bodies, leveraging a restricted Gaussian oracle for enhanced efficiency.
Why It Matters
Uniform sampling from convex bodies is a core primitive in computational geometry, convex optimization, and high-dimensional data analysis. By moving beyond the membership-oracle model used by many classical and modern uniform samplers, and instead leveraging the richer projection and separation oracles commonly assumed in convex optimization, the paper obtains samplers with non-asymptotic complexity guarantees, improving sampling techniques in high-dimensional spaces.
Key Takeaways
- Introduces efficient algorithms for uniform sampling from convex bodies.
- Utilizes a restricted Gaussian oracle to enhance sampling methods.
- Provides non-asymptotic complexity guarantees for unbiased samples.
- Supports theoretical findings with numerical experiments.
- Expands the applicability of sampling techniques in convex optimization.
Computer Science > Data Structures and Algorithms
arXiv:2510.02983 (cs)
[Submitted on 3 Oct 2025 (v1), last revised 16 Feb 2026 (this version, v2)]
Title: Oracle-based Uniform Sampling from Convex Bodies
Authors: Thanh Dang, Jiaming Liang
Abstract: We propose new Markov chain Monte Carlo algorithms to sample a uniform distribution on a convex body $K$. Our algorithms are based on the proximal sampler, which uses Gibbs sampling on an augmented distribution and assumes access to the so-called restricted Gaussian oracle (RGO). The key contribution of this work is an efficient implementation of the RGO for uniform sampling on convex $K$ that goes beyond the membership-oracle model used in many classical and modern uniform samplers, and instead leverages richer oracle access commonly assumed in convex optimization. We implement the RGO via rejection sampling and access to either a projection oracle or a separation oracle on $K$. In both oracle models, we provide non-asymptotic complexity guarantees for obtaining unbiased samples, with accuracy quantified in Rényi divergence and $\chi^2$-divergence, and we support these theoretical guarantees with numerical experiments.
Subjects: Data Structures and Algorithms (cs.DS); Machine Learning (cs.LG); Optimization and Control (math.OC); Machine Learning (stat.ML)
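To make the abstract's pipeline concrete, here is a minimal sketch of a proximal sampler whose RGO is implemented by rejection sampling with a projection oracle, as the paper describes. This is an illustration, not the paper's exact algorithm: the convex body `K` is assumed to be the Euclidean unit ball (so the projection oracle is closed form), and the step size `eta` and iteration count are arbitrary illustrative choices.

```python
import numpy as np

def proj_ball(z, radius=1.0):
    # Projection oracle for the assumed body K = Euclidean ball of given radius.
    norm = np.linalg.norm(z)
    return z if norm <= radius else z * (radius / norm)

def rgo_rejection(y, eta, proj, rng, max_tries=10_000):
    # Restricted Gaussian oracle via rejection sampling:
    # draw z ~ N(y, eta*I) and accept once z lies in K,
    # detected by checking that the projection leaves z unchanged.
    d = y.shape[0]
    for _ in range(max_tries):
        z = y + np.sqrt(eta) * rng.standard_normal(d)
        if np.allclose(proj(z), z):
            return z
    raise RuntimeError("rejection sampling did not accept; try a smaller eta")

def proximal_sampler(x0, eta, proj, n_iters, rng):
    # Gibbs sampling on the augmented distribution
    # pi(x, y) ∝ 1_K(x) * exp(-||x - y||^2 / (2 * eta)).
    x = x0
    for _ in range(n_iters):
        y = x + np.sqrt(eta) * rng.standard_normal(x.shape[0])  # sample y | x
        x = rgo_rejection(y, eta, proj, rng)                    # sample x | y (RGO)
    return x

rng = np.random.default_rng(0)
sample = proximal_sampler(np.zeros(2), eta=0.1, proj=proj_ball, n_iters=200, rng=rng)
```

Each Gibbs sweep alternates an unconstrained Gaussian step in `y` with an RGO step in `x`; smaller `eta` raises the rejection-sampling acceptance rate at the cost of slower mixing, which is the trade-off the paper's complexity analysis quantifies.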