[2604.03603] Stochastic Generative Plug-and-Play Priors

arXiv - Machine Learning


Computer Science > Computer Vision and Pattern Recognition
arXiv:2604.03603 (cs) [Submitted on 4 Apr 2026]

Title: Stochastic Generative Plug-and-Play Priors
Authors: Chicago Y. Park, Edward P. Chandler, Yuyang Hu, Michael T. McCann, Cristina Garcia-Cardona, Brendt Wohlberg, Ulugbek S. Kamilov

Abstract: Plug-and-play (PnP) methods are widely used for solving imaging inverse problems by incorporating a denoiser into optimization algorithms. Score-based diffusion models (SBDMs) have recently demonstrated strong generative performance through a denoiser trained across a wide range of noise levels. Despite their shared reliance on denoisers, it remains unclear how to systematically use SBDMs as priors within the PnP framework without relying on reverse diffusion sampling. In this paper, we establish a score-based interpretation of PnP that justifies using pretrained SBDMs directly within PnP algorithms. Building on this connection, we introduce a stochastic generative PnP (SGPnP) framework that injects noise to better leverage the expressive generative SBDM priors, thereby improving robustness in severely ill-posed inverse problems. We provide a new theory showing that this noise injection induces optimization on a Gaussian-smoothed objective and promotes escape from strict saddle points. Experiments on challenging inverse tasks, such as mu...

Originally published on April 07, 2026. Curated by AI News.
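The abstract describes a PnP iteration that alternates a data-consistency step with a denoising step, with Gaussian noise injected between the two. The sketch below illustrates that structure only; it is not the authors' SGPnP algorithm. A simple soft-thresholding function stands in for the pretrained score-based diffusion denoiser, and all function names, parameters, and the noise-decay schedule are illustrative assumptions.

```python
import numpy as np

def data_grad(x, A, y):
    # Gradient of the data-fidelity term 0.5 * ||A x - y||^2.
    return A.T @ (A @ x - y)

def denoise(x, tau):
    # Stand-in denoiser (soft-thresholding). In the paper's setting,
    # a pretrained score-based diffusion denoiser would be used here.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def stochastic_pnp(y, A, steps=200, gamma=0.01, sigma0=0.05,
                   decay=0.98, tau=0.02, seed=0):
    # Illustrative stochastic PnP loop: gradient step on the data term,
    # noise injection, then a plug-and-play denoising step. The decaying
    # noise level loosely mirrors annealing over a smoothed objective.
    rng = np.random.default_rng(seed)
    x = A.T @ y          # simple back-projection initialization
    sigma = sigma0
    for _ in range(steps):
        z = x - gamma * data_grad(x, A, y)            # data consistency
        z = z + sigma * rng.standard_normal(z.shape)  # noise injection
        x = denoise(z, tau)                           # prior step
        sigma *= decay
    return x
```

As a usage example, for an overdetermined linear system `y = A @ x_true` with a sparse `x_true`, the loop drives the residual `||A x - y||` well below its initial value while the injected noise perturbs iterates away from poor stationary points.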

Related Articles

[2603.12365] Optimal Experimental Design for Reliable Learning of History-Dependent Constitutive Laws (Machine Learning)

[2603.17573] HeiSD: Hybrid Speculative Decoding for Embodied Vision-Language-Action Models with Kinematic Awareness (Machine Learning)

[2512.20562] Shallow Neural Networks Learn Low-Degree Spherical Polynomials with Feature Learning by Learnable Channel Attention (Machine Learning)

[2603.07475] A Comparative Analysis of Layer-wise Representational Capacity in AR and Diffusion LLMs (LLMs)
