[2602.12468] Continuous Diffusion Models Can Obey Formal Syntax
Summary
The paper introduces a method for guiding continuous diffusion models to adhere to formal syntactic constraints, achieving high constraint satisfaction rates while maintaining output quality.
Why It Matters
This research addresses a significant challenge in natural language processing by improving the ability of diffusion models to generate outputs that conform to specific syntactic rules. This advancement can enhance the reliability of AI-generated content in applications requiring structured data formats, such as JSON.
Key Takeaways
- Introduces a training-free method for guiding diffusion models to meet formal syntax constraints.
- Achieves 68-96% satisfaction of regular-expression constraints with minimal perplexity cost.
- Outperforms autoregressive constrained decoding in both constraint satisfaction and output quality.
Computer Science > Machine Learning
arXiv:2602.12468 (cs)
[Submitted on 12 Feb 2026]

Title: Continuous Diffusion Models Can Obey Formal Syntax
Authors: Jinwoo Kim, Taylor Berg-Kirkpatrick, Loris D'Antoni

Abstract: Diffusion language models offer a promising alternative to autoregressive models due to their global, non-causal generation process, but their continuous latent dynamics make discrete constraints -- e.g., the output should be a JSON file that matches a given schema -- difficult to impose. We introduce a training-free guidance method for steering continuous diffusion language models to satisfy formal syntactic constraints expressed using regular expressions. Our approach constructs an analytic score estimating the probability that a latent state decodes to a valid string accepted by a given regular expression, and uses its gradient to guide sampling, without training auxiliary classifiers. The denoising process targets the base model conditioned on syntactic validity. We implement our method in Diffinity on top of the PLAID diffusion model and evaluate it on 180 regular-expression constraints over JSON and natural-language benchmarks. Diffinity achieves 68-96% constraint satisfaction while incurring only a small perplexity cost relative to unconstrained sampling, outperforming autoregressive constrained decoding in both constraint satisfaction...
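The guidance idea described in the abstract -- add the gradient of a smooth "validity" score to each denoising update -- can be sketched in a toy form. This is an illustrative sketch only, not the paper's actual Diffinity/PLAID implementation: the names (`validity_score`, `guided_step`) and the Gaussian surrogate for "decodes to a regex-accepted string" are assumptions made for demonstration.

```python
import numpy as np

def validity_score(x, target):
    # Smooth surrogate for log p(valid | x): in this toy, a Gaussian
    # log-density around an embedding of some regex-accepted string.
    # (The paper instead derives an analytic score from the regex itself.)
    return -0.5 * np.sum((x - target) ** 2)

def validity_grad(x, target):
    # Analytic gradient of the surrogate score above.
    return -(x - target)

def guided_step(x, denoise_grad, target, guidance_scale=0.5, step=0.1):
    # One denoising update: the base model's direction plus a guidance
    # term pulling the latent toward syntactic validity -- no auxiliary
    # classifier is trained, matching the training-free setup.
    return x + step * (denoise_grad + guidance_scale * validity_grad(x, target))

# Toy run: with a zero base-model direction, the latent drifts toward
# the "valid" embedding under guidance alone.
rng = np.random.default_rng(0)
target = np.ones(4)           # stand-in embedding of a valid string
x = rng.normal(size=4)
for _ in range(200):
    x = guided_step(x, denoise_grad=np.zeros(4), target=target)
print(np.round(x, 2))
```

In the real method, the validity score is built from the regular expression's structure rather than a fixed target point, so the gradient steers toward the whole set of accepted strings instead of a single one.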