[2602.18449] Prompt Optimization Via Diffusion Language Models
Summary
The paper presents a diffusion-based framework for prompt optimization: a Diffusion Language Model iteratively refines system prompts through masked denoising, improving the performance of a frozen target model without modifying that model itself.
Why It Matters
This research addresses the challenge of prompt optimization in large language models (LLMs), offering a scalable and model-agnostic approach that can significantly improve LLM performance. As AI applications grow, effective prompt management becomes crucial for maximizing model utility and user satisfaction.
Key Takeaways
- Introduces a diffusion-based method for prompt optimization.
- Enables flexible updates to prompts without requiring model changes.
- Demonstrates improved performance on various benchmarks.
- Shows that moderate diffusion step counts give the best balance between refinement quality and stability.
- Offers a scalable solution applicable across different LLMs.
Computer Science > Computation and Language
arXiv:2602.18449 (cs) [Submitted on 30 Jan 2026]
Title: Prompt Optimization Via Diffusion Language Models
Authors: Shiyu Wang, Haolin Chen, Liangwei Yang, Jielin Qiu, Rithesh Murthy, Ming Zhu, Zixiang Chen, Silvio Savarese, Caiming Xiong, Shelby Heinecke, Huan Wang
Abstract: We propose a diffusion-based framework for prompt optimization that leverages Diffusion Language Models (DLMs) to iteratively refine system prompts through masked denoising. By conditioning on interaction traces, including user queries, model responses, and optional feedback, our method enables flexible, span-level prompt updates without requiring gradient access or modifying the downstream language model. Across diverse benchmarks (e.g., $\tau$-bench, SST-2, SST-5), DLM-optimized prompts consistently improve the performance of a frozen target LLM (e.g., GPT-4o-mini). We further show that moderate diffusion step counts provide the best balance between refinement quality and stability. These results highlight diffusion-based prompt optimization as a general, model-agnostic, and scalable approach for enhancing LLM performance through iterative prompt refinement.
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
Cite as: arXiv:2602.18449 [cs.CL]
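The abstract describes a loop in which prompt spans are masked and then denoised, conditioned on interaction traces, with the downstream model kept frozen. The sketch below illustrates that loop in miniature; it is not the paper's implementation. The `toy_denoiser` is a hypothetical stand-in for a real Diffusion Language Model, and `score_fn` stands in for whatever benchmark metric (e.g., accuracy on held-out queries) drives the refinement.

```python
import random

MASK = "<mask>"

def mask_spans(tokens, mask_ratio=0.3, rng=None):
    """Randomly replace a fraction of tokens with MASK (span-level corruption)."""
    rng = rng or random
    out = list(tokens)
    n_mask = max(1, int(len(out) * mask_ratio))
    for i in rng.sample(range(len(out)), n_mask):
        out[i] = MASK
    return out

def toy_denoiser(masked_tokens, trace):
    """Hypothetical stand-in for a DLM denoising step: fills each MASK with
    the most frequent word in the interaction trace. A real DLM would run
    several reverse-diffusion steps conditioned on the trace."""
    words = trace.split()
    fill = max(set(words), key=words.count) if words else "please"
    return [fill if t == MASK else t for t in masked_tokens]

def refine_prompt(prompt, trace, score_fn, steps=5, seed=0):
    """Iteratively mask and denoise the prompt, keeping a candidate only
    when the user-supplied score improves. The frozen target model is
    never touched; only the prompt text changes."""
    rng = random.Random(seed)
    best, best_score = prompt.split(), score_fn(prompt)
    for _ in range(steps):
        candidate = toy_denoiser(mask_spans(best, rng=rng), trace)
        s = score_fn(" ".join(candidate))
        if s > best_score:
            best, best_score = candidate, s
    return " ".join(best), best_score

# Usage: reward prompts that mention "concise", the trace's dominant word.
prompt, score = refine_prompt(
    "You are a helpful assistant",
    trace="user asked for concise concise answers",
    score_fn=lambda p: p.count("concise"),
    steps=4,
)
```

The accept-only-on-improvement rule mirrors the paper's observation that moderate step counts matter: too few denoising passes under-refine, while too many can destabilize a prompt that already works.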