[2411.01629] Denoising Diffusions with Optimal Transport: Localization, Curvature, and Multi-Scale Complexity

arXiv - Machine Learning · 4 min read

Summary

This paper studies denoising diffusions through the lens of optimal transport, analyzing localization uncertainty, curvature, and multi-scale complexity in diffusion-based generative models.

Why It Matters

Understanding the mechanics of denoising in diffusion models is crucial for advancements in machine learning. This research provides insights into the complexities of denoising processes, which can enhance model performance in various applications, from image generation to data reconstruction.

Key Takeaways

  • The curvature function determines the localization uncertainty of denoising, measured as the conditional variance of the past location given the current one.
  • The paper introduces a multi-scale curvature complexity that quantifies denoising difficulty across signal-to-noise ratios.
  • The effectiveness of the diffuse-then-denoise process is governed by the contraction of the forward diffusion chain, offset by the possible expansion of the backward denoising chain.
  • Denoising is easier when the integrated tail function of the curvature is light, indicating favorable conditions for reconstruction.
  • Non-log-concave examples illustrate the practical implications of multi-scale complexity in denoising tasks.

Statistics > Machine Learning · arXiv:2411.01629 (stat)
[Submitted on 3 Nov 2024 (v1), last revised 14 Feb 2026 (this version, v2)]

Title: Denoising Diffusions with Optimal Transport: Localization, Curvature, and Multi-Scale Complexity
Authors: Tengyuan Liang, Kulunu Dharmakeerthi, Takuya Koriyama

Abstract: Adding noise is easy; what about denoising? Diffusion is easy; what about reverting a diffusion? Diffusion-based generative models aim to denoise a Langevin diffusion chain, moving from a log-concave equilibrium measure $\nu$, say an isotropic Gaussian, back to a complex, possibly non-log-concave initial measure $\mu$. The score function performs denoising, moving backward in time, and predicting the conditional mean of the past location given the current one. We show that score denoising is the optimal backward map in transportation cost. What is its localization uncertainty? We show that the curvature function determines this localization uncertainty, measured as the conditional variance of the past location given the current. We study in this paper the effectiveness of the diffuse-then-denoise process: the contraction of the forward diffusion chain, offset by the possible expansion of the backward denoising chain, governs the denoising difficulty. For any initial measure $\mu$, w...
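The abstract's central objects, the score denoiser as a conditional mean and localization uncertainty as a conditional variance, can be made concrete in a toy setting. Below is a minimal sketch, not from the paper: it assumes a one-dimensional Ornstein-Uhlenbeck forward chain and a two-point initial measure on {-1, +1} (a simple non-log-concave example), where both the posterior mean $E[x_0 \mid x_t]$ and the posterior variance $\mathrm{Var}[x_0 \mid x_t]$ are available in closed form. The function name `posterior_stats` is illustrative.

```python
import math

def posterior_stats(x_t, t):
    """Posterior mean and variance of x0 given x_t under a toy model.

    Forward (OU) chain: x_t = e^{-t} x_0 + sqrt(1 - e^{-2t}) z,  z ~ N(0, 1),
    with x_0 uniform on {-1, +1} -- a simple non-log-concave initial measure.
    The marginal of x_t is a two-component Gaussian mixture, so the posterior
    over x_0 reduces to a pair of weights on the two components.
    """
    m = math.exp(-t)          # contraction factor of the forward chain
    var = 1.0 - m * m         # noise variance accumulated by time t
    # Log-weights of the components at +1 and -1 (difference is overflow-safe)
    a = -((x_t - m) ** 2) / (2 * var)
    b = -((x_t + m) ** 2) / (2 * var)
    w_plus = 1.0 / (1.0 + math.exp(b - a))
    mean = w_plus - (1.0 - w_plus)   # E[x0 | x_t]: the score denoiser's target
    cond_var = 1.0 - mean ** 2       # Var[x0 | x_t]: localization uncertainty
    return mean, cond_var

# Near the midpoint the two components are equally plausible: uncertainty peaks.
mean_mid, var_mid = posterior_stats(0.0, t=1.0)
# Deep in one tail the sign of x0 is essentially determined: uncertainty is low.
mean_tail, var_tail = posterior_stats(3.0, t=1.0)
```

The contrast between the two calls mirrors the paper's theme: denoising is locally easy where the conditional variance is small and hard where the posterior remains spread over multiple modes.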
