[2603.03305] Draft-Conditioned Constrained Decoding for Structured Generation in LLMs
Computer Science > Computation and Language

arXiv:2603.03305 (cs)

[Submitted on 8 Feb 2026]

Title: Draft-Conditioned Constrained Decoding for Structured Generation in LLMs

Authors: Avinash Reddy, Thayne T. Walker, James S. Ide, Amrit Singh Bedi

Abstract: Large language models (LLMs) are increasingly used to generate executable outputs, JSON objects, and API calls, where a single syntax error can make the output unusable. Constrained decoding enforces validity token-by-token via masking and renormalization, but it can distort generation when the model assigns low probability mass to valid continuations, pushing decoding toward locally valid yet semantically incorrect trajectories. We propose \emph{Draft-Conditioned Constrained Decoding (DCCD)}, a simple two-step, training-free inference procedure that decouples semantic planning from structural enforcement: an unconstrained draft is generated first, and constrained decoding is then applied, conditioned on this draft, to guarantee validity. We analyze DCCD through a KL-projection view, showing that draft conditioning increases feasible mass and reduces the cumulative "projection tax" induced by hard constraints, with an optional best-of-$K$ draft selection. Across structured reasoning benchmarks, DCCD improves strict structured accuracy by up to +24 percentage...
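The abstract's KL-projection view can be made concrete via the standard formalization of masking-and-renormalization; the notation below is ours, not necessarily the paper's. At step $t$, let $p_t(\cdot \mid y_{<t})$ be the model's next-token distribution and $V_t$ the set of tokens the grammar permits. Constrained decoding samples from

\[
\tilde{p}_t(y \mid y_{<t}) \;=\; \frac{p_t(y \mid y_{<t})\,\mathbf{1}[y \in V_t]}{Z_t},
\qquad
Z_t \;=\; \sum_{y' \in V_t} p_t(y' \mid y_{<t}),
\]

which is exactly the KL projection of $p_t$ onto distributions supported on $V_t$: $\tilde{p}_t = \arg\min_{q:\,\operatorname{supp}(q) \subseteq V_t} \mathrm{KL}(q \,\|\, p_t)$, with $\mathrm{KL}(\tilde{p}_t \,\|\, p_t) = -\log Z_t$. On this reading, the cumulative "projection tax" is $-\sum_t \log Z_t$, which shrinks as the feasible mass $Z_t$ grows; conditioning on a draft is meant to raise $Z_t$ by steering the model toward continuations that are already mostly valid.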
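At the procedural level, the two-step recipe with best-of-$K$ draft selection can be sketched as follows. This is a minimal illustration under our own assumptions: `lm_generate`, `constrained_generate`, `draft_score`, and the prompt template are hypothetical placeholders, since the abstract does not specify how the draft is fed back to the constrained decoder.

```python
"""Minimal sketch of Draft-Conditioned Constrained Decoding (DCCD).

All interfaces here are hypothetical stand-ins, not the paper's API:
  lm_generate          -- unconstrained sampling from the LLM
  constrained_generate -- grammar-masked decoding (output always valid)
  draft_score          -- heuristic draft quality, e.g. mean token log-prob
"""
from typing import Callable, List


def dccd(
    prompt: str,
    lm_generate: Callable[[str], str],
    constrained_generate: Callable[[str], str],
    draft_score: Callable[[str, str], float],
    k: int = 4,
) -> str:
    # Step 1 (semantic planning): sample K unconstrained drafts and keep
    # the highest-scoring one (best-of-K draft selection).
    drafts: List[str] = [lm_generate(prompt) for _ in range(k)]
    best_draft = max(drafts, key=lambda d: draft_score(prompt, d))

    # Step 2 (structural enforcement): condition on the chosen draft, then
    # decode under the grammar so the final output is valid by construction.
    # The template below is our guess at how draft conditioning is realized.
    conditioned_prompt = (
        f"{prompt}\n\nDraft answer (may be malformed):\n{best_draft}\n\n"
        "Rewrite the draft as a valid structured output:\n"
    )
    return constrained_generate(conditioned_prompt)
```

In practice, `constrained_generate` would wrap a grammar-aware decoder that masks invalid tokens and renormalizes at each step, so the returned string is syntactically valid regardless of draft quality; the draft's role is only to place semantic content in context before the hard constraints are applied.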