[2602.07738] Learnable Chernoff Baselines for Inference-Time Alignment
Summary
The paper introduces Learnable Chernoff Baselines (LCBs) for efficient inference-time reward-guided alignment in generative models, improving sampling efficiency and control over inference-compute scaling.
Why It Matters
Existing inference-time alignment methods tend to be either architecture-specific or computationally expensive at inference. By proposing LCBs, the authors provide an approach that improves the sampling efficiency of reward-guided generation, which matters for real-time applications of generative models.
Key Takeaways
- LCBs enable efficient sampling from KL-regularized reward alignment.
- The method requires only black-box sampling access to the pretrained model.
- Total-variation guarantees bound the gap between LCB samples and the ideal aligned model.
- LCBs substantially reduce the number of queries to the pretrained model compared with ideal rejection sampling.
- The approach is applicable in both continuous and discrete diffusion settings.
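To make the rejection-sampling idea concrete, here is a minimal illustrative sketch of sampling from an exponentially tilted target using only black-box draws from a base model. This is not the paper's algorithm: the fixed `baseline` argument is a hypothetical stand-in for the learnable Chernoff baseline, which in LCBs is chosen adaptively to trade acceptance rate against approximation error.

```python
import math
import random

def tilted_rejection_sample(sample_p, reward, beta=1.0, baseline=0.0,
                            max_queries=10_000):
    """Draw one approximate sample from pi(x) ∝ p(x) * exp(r(x)/beta),
    using only black-box samples from p (generic rejection sampling).

    `baseline` caps the acceptance ratio: a draw x is accepted with
    probability min(1, exp((r(x) - baseline) / beta)). A larger baseline
    rejects less aggressively in the tail but distorts the tilt above it.
    """
    for _ in range(max_queries):
        x = sample_p()  # one black-box query to the pretrained model
        accept_prob = min(1.0, math.exp((reward(x) - baseline) / beta))
        if random.random() < accept_prob:
            return x
    return x  # fall back to the last draw if the query budget is exhausted

# Toy usage: base model p = N(0, 1), reward r(x) = x, so the ideal
# tilted target is (approximately) a unit Gaussian shifted toward +1.
random.seed(0)
draws = [
    tilted_rejection_sample(lambda: random.gauss(0.0, 1.0),
                            reward=lambda x: x, beta=1.0, baseline=2.0)
    for _ in range(2000)
]
mean = sum(draws) / len(draws)  # shifted above the base mean of 0
```

The practical point the paper makes is that the acceptance probability (here set by `baseline`) is exactly the knob controlling inference compute: a tighter bound means fewer wasted queries but a looser approximation, and LCBs learn this trade-off rather than fixing it by hand.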
Abstract
arXiv:2602.07738 [cs.LG] — submitted 8 Feb 2026 (v1), last revised 13 Feb 2026 (v2)
Authors: Sunil Madhow, Yuchen Liang, Ness Shroff, Yingbin Liang, Yu-Xiang Wang
We study inference-time reward-guided alignment for generative models. Existing methods often rely on either architecture-specific adaptations or computationally costly inference procedures. We introduce Learnable Chernoff Baselines (LCBs) as a method for efficiently and approximately sampling from the exponentially tilted kernels that arise from KL-regularized reward alignment. Using only black-box sampling access to the pretrained model, LCBs implement a form of rejection sampling with adaptively selected acceptance probabilities, which allows fine-grained control over inference-compute scaling. We establish total-variation guarantees to the ideal aligned model, and demonstrate in both continuous and discrete diffusion settings that LCB sampling closely matches ideal rejection sampling while using substantially fewer queries to the pretrained model.
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI)
DOI: https://doi.org/10.48550/arXiv.2602.07738
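The "exponentially tilted kernels" in the abstract refer to a standard result for KL-regularized reward maximization; in the notation below (not from the source), $p$ is the pretrained model, $r$ the reward, and $\beta$ the regularization strength:

```latex
\pi^\star \;=\; \arg\max_{\pi}\;\mathbb{E}_{x\sim\pi}\bigl[r(x)\bigr] \;-\; \beta\,\mathrm{KL}\bigl(\pi \,\Vert\, p\bigr)
\qquad\Longrightarrow\qquad
\pi^\star(x) \;\propto\; p(x)\,\exp\!\bigl(r(x)/\beta\bigr).
```

Sampling from $\pi^\star$ without retraining is the inference-time alignment problem; rejection sampling against $p$ is one exact-in-the-limit route, and LCBs are presented as a query-efficient approximation to it.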