[2602.16570] Steering diffusion models with quadratic rewards: a fine-grained analysis
Summary
This article presents a detailed analysis of sampling from reward-tilted diffusion models, focusing on quadratic rewards and their computational tractability.
Why It Matters
Inference-time steering of pre-trained diffusion models underpins applications ranging from inverse problems to guided image generation, yet the heuristics deployed in practice have a variety of failure modes. A fine-grained tractability analysis clarifies when those heuristics can be efficiently improved and when the task is fundamentally hard, guiding the design of more effective algorithms.
Key Takeaways
- Linear-reward tilts are always efficiently sampleable, a simple result that appears to have gone unnoticed in the literature.
- The study introduces an efficient algorithm, built on the Hubbard-Stratonovich transform, for sampling from low-rank positive-definite quadratic tilts.
- Negative-definite quadratic tilts are computationally intractable, even at minimal rank.
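A minimal sketch of why linear-reward tilts are easy, in the simplest possible setting: a Gaussian base density standing in for the diffusion model. This illustrates the underlying identity only, not the paper's algorithm; all names and values below are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy base density p = N(mu, Sigma) standing in for the diffusion model,
# and a linear reward r(x) = b^T x. (Illustrative values, not from the paper.)
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([0.3, -0.1])

# For a Gaussian base, completing the square shows the tilt
# p*(x) ~ p(x) exp(b^T x) is again Gaussian with the same covariance
# and shifted mean mu + Sigma b; sampling it costs nothing extra.
tilted_mean_exact = mu + Sigma @ b

# Cross-check by self-normalized importance sampling:
# weight samples of p by exp(b^T x) and take the weighted mean.
xs = rng.multivariate_normal(mu, Sigma, size=200_000)
w = np.exp(xs @ b)
w = w / w.sum()
tilted_mean_mc = w @ xs
```

The closed-form shift is exact; the importance-sampling estimate merely confirms it numerically on this toy example.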
Computer Science > Machine Learning, arXiv:2602.16570 (cs)
[Submitted on 18 Feb 2026]
Title: Steering diffusion models with quadratic rewards: a fine-grained analysis
Authors: Ankur Moitra, Andrej Risteski, Dhruv Rohatgi
Abstract: Inference-time algorithms are an emerging paradigm in which pre-trained models are used as subroutines to solve downstream tasks. Such algorithms have been proposed for tasks ranging from inverse problems and guided image generation to reasoning. However, the methods currently deployed in practice are heuristics with a variety of failure modes -- and we have very little understanding of when these heuristics can be efficiently improved. In this paper, we consider the task of sampling from a reward-tilted diffusion model -- that is, sampling from $p^{\star}(x) \propto p(x) \exp(r(x))$ -- given a reward function $r$ and pre-trained diffusion oracle for $p$. We provide a fine-grained analysis of the computational tractability of this task for quadratic rewards $r(x) = x^\top A x + b^\top x$. We show that linear-reward tilts are always efficiently sampleable -- a simple result that seems to have gone unnoticed in the literature. We use this as a building block, along with a conceptually new ingredient -- the Hubbard-Stratonovich transform -- to provide an efficient algorithm for samplin...
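The Hubbard-Stratonovich ingredient mentioned in the abstract rests on the Gaussian moment-generating-function identity $\exp(y^2/2) = \mathbb{E}_{z \sim N(0,1)}[\exp(yz)]$: applied with $y = v^\top x$, it rewrites a rank-one positive quadratic tilt as a Gaussian mixture of linear tilts, reducing it to the linear case the paper shows is easy. A quick numerical check of the identity (our own sketch, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Gaussian MGF identity behind the Hubbard-Stratonovich transform:
#   exp(y**2 / 2) == E_{z ~ N(0,1)}[exp(y * z)]
# With y = v^T x, the rank-one quadratic tilt exp((v^T x)^2 / 2)
# becomes an average over z of linear tilts exp(z * v^T x).
y = 1.3  # arbitrary test value
z = rng.standard_normal(2_000_000)
mc_estimate = np.exp(y * z).mean()
exact = np.exp(y ** 2 / 2)
```

Note the identity only holds for the positive-quadratic direction, consistent with the paper's finding that negative-definite tilts remain intractable.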