[2601.01003] Contractive Diffusion Policies: Robust Action Diffusion via Contractive Score-Based Sampling with Differential Equations
Computer Science > Machine Learning
arXiv:2601.01003 (cs)
[Submitted on 2 Jan 2026 (v1), last revised 21 Mar 2026 (this version, v2)]

Title: Contractive Diffusion Policies: Robust Action Diffusion via Contractive Score-Based Sampling with Differential Equations
Authors: Amin Abyaneh, Charlotte Morissette, Mohamad H. Danesh, Anas El Houssaini, David Meger, Gregory Dudek, Hsiu-Chin Lin

Abstract: Diffusion policies have emerged as powerful generative models for offline policy learning, whose sampling process can be rigorously characterized by a score function guiding a stochastic differential equation (SDE). However, the same score-based SDE modeling that grants diffusion policies the flexibility to learn diverse behavior also incurs solver and score-matching errors, large data requirements, and inconsistencies in action generation. While less critical in image generation, these inaccuracies compound and lead to failure in continuous control settings. We introduce contractive diffusion policies (CDPs) to induce contractive behavior in the diffusion sampling dynamics. Contraction pulls nearby flows closer to enhance robustness against solver and score-matching errors while reducing unwanted action variance. We develop an in-depth theoretical analysis along with a pr...
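The abstract does not spell out the paper's formulation, but the core idea of adding a contraction term to score-based reverse-SDE sampling can be illustrated with a toy sketch. The snippet below, a minimal one-dimensional example assuming a VP-style reverse SDE, augments the Euler–Maruyama drift with a hand-added term `-contraction * (x - anchor)` that pulls nearby sample trajectories together; the function names, the Gaussian toy score, and the contraction form are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sample_reverse_sde(score_fn, n_samples=5000, n_steps=200, beta=1.0,
                       contraction=0.0, anchor=0.0, seed=0):
    """Euler-Maruyama integration of a VP reverse-time SDE,
    dx = [0.5*beta*x + beta*score(x)] dt + sqrt(beta) dW,
    with an optional contraction term -contraction*(x - anchor)
    added to the drift (illustrative, not the paper's scheme)."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / n_steps
    x = rng.standard_normal(n_samples)  # start from the prior N(0, 1)
    for _ in range(n_steps):
        drift = 0.5 * beta * x + beta * score_fn(x)  # standard reverse drift
        drift += -contraction * (x - anchor)         # contraction toward anchor
        x = x + drift * dt + np.sqrt(beta * dt) * rng.standard_normal(n_samples)
    return x

# Toy setting: data ~ N(0, 1), so the true score is s(x) = -x at every noise level.
score = lambda x: -x
plain = sample_reverse_sde(score, contraction=0.0)
tight = sample_reverse_sde(score, contraction=2.0)
print(np.var(plain) > np.var(tight))  # prints True: contraction shrinks variance
```

In this toy case the contraction term simply increases the mean-reversion rate of the sampling dynamics, so the sample variance drops (here from roughly 1 to roughly 0.2), matching the abstract's claim that contraction reduces unwanted action variance while damping accumulated solver and score errors.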