[2603.17677] Adaptive Guidance for Retrieval-Augmented Masked Diffusion Models
Computer Science > Computation and Language

arXiv:2603.17677 (cs)

[Submitted on 18 Mar 2026 (v1), last revised 3 Apr 2026 (this version, v2)]

Title: Adaptive Guidance for Retrieval-Augmented Masked Diffusion Models

Authors: Jaemin Kim, Jong Chul Ye

Abstract: Retrieval-Augmented Generation (RAG) improves factual grounding by incorporating external knowledge into language model generation. However, when retrieved context is noisy, unreliable, or inconsistent with the model's parametric knowledge, it introduces retrieval-prior conflicts that can degrade generation quality. While this problem has been studied in autoregressive language models, it remains largely unexplored in diffusion-based language models, where the iterative denoising process introduces unique challenges for integrating retrieved context. In this work, we propose Adaptive Retrieval-Augmented Masked Diffusion (ARAM), a training-free adaptive guidance framework for Masked Diffusion Models (MDMs) in RAG settings. ARAM dynamically calibrates the guidance scale during denoising according to the Signal-to-Noise Ratio (SNR) of the distributional shift induced by retrieved context. Intuitively, the model strengthens guidance when the retrieved context provides reliable corrective evidence and suppresses it when the contextual signal is noisy or non-supportive....
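The core idea in the abstract, calibrating the guidance weight by the SNR of the shift that retrieved context induces in the model's predictions, can be sketched as follows. This is an illustrative toy implementation, not the paper's method: the function `adaptive_guidance`, the SNR formula, and the squashing of the weight into `[0, w_max)` are all assumptions made for the example.

```python
import numpy as np

def adaptive_guidance(uncond_logits, cond_logits, w_max=2.0, eps=1e-8):
    """Blend unconditional and retrieval-conditioned logits, scaling the
    guidance weight by the SNR of the shift the retrieved context induces.

    Hypothetical sketch: the paper's exact SNR definition and guidance
    schedule are not specified in the abstract.
    """
    # Distributional shift induced by conditioning on retrieved context.
    shift = cond_logits - uncond_logits
    # SNR of the shift: a strong, consistent shift (high mean, low variance)
    # suggests reliable corrective evidence; a zero-mean, high-variance shift
    # suggests noise.
    snr = (shift.mean() ** 2) / (shift.var() + eps)
    # Map SNR monotonically into [0, w_max): guidance is strengthened for
    # reliable context and suppressed toward 0 for noisy context.
    w = w_max * snr / (1.0 + snr)
    return uncond_logits + w * shift, w

# Reliable context: a consistent shift toward certain tokens -> high SNR.
uncond = np.zeros(5)
_, w_strong = adaptive_guidance(uncond, uncond + 1.0)

# Noisy context: zero-mean, high-variance shift -> low SNR, guidance suppressed.
_, w_noisy = adaptive_guidance(uncond, np.array([1.0, -1.0, 0.5, -0.5, 0.0]))
```

In a masked diffusion model this weight would be recomputed at each denoising step, so the guidance scale adapts as the partially unmasked sequence and its retrieved-context shift evolve.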