[2510.08431] Large Scale Diffusion Distillation via Score-Regularized Continuous-Time Consistency

arXiv - Machine Learning · 4 min read

Summary

This paper presents a score-regularized continuous-time consistency model (rCM) for large-scale diffusion distillation, addressing the challenge of generating high-quality images and videos in only a few sampling steps.

Why It Matters

As machine learning applications increasingly demand high-quality image and video generation, this research offers a scalable way to accelerate diffusion sampling while preserving visual fidelity and diversity, which is crucial for deploying these models in practice.

Key Takeaways

  • Introduces the score-regularized continuous-time consistency model (rCM) for improved image and video generation (see the objective sketch after this list).
  • Demonstrates significant improvements in visual quality and diversity over existing methods.
  • Achieves high-fidelity sample generation in fewer steps, accelerating diffusion sampling by up to 50 times.
  • Validates effectiveness on large models with over 10 billion parameters.
  • Provides a theoretically grounded framework for advancing large-scale diffusion distillation.
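
The name "score-regularized continuous-time consistency" suggests a training objective that pairs a consistency term with a score-based regularizer. The sketch below shows one plausible shape for such a combined loss in PyTorch; the function names, the linear noising schedule, the clean-data consistency target, the teacher-anchored score term, and the lambda_score weighting are all illustrative assumptions rather than the paper's actual formulation.

```python
# Hedged sketch of a score-regularized consistency objective.
# Everything here (names, schedule, targets, weighting) is an
# illustrative assumption, not the paper's actual method.
import torch

def rcm_style_loss(student, teacher, x0, lambda_score=0.1):
    """Consistency term plus a teacher-anchored score regularizer.

    student, teacher: callables mapping (x_t, t) -> predicted clean sample.
    x0: a batch of clean training samples, shape (B, D).
    """
    t = torch.rand(x0.shape[0], device=x0.device).unsqueeze(-1)  # t in (0, 1)
    noise = torch.randn_like(x0)
    x_t = (1 - t) * x0 + t * noise  # linear (rectified-flow style) noising

    pred = student(x_t, t)
    with torch.no_grad():
        teacher_pred = teacher(x_t, t)  # frozen pretrained diffusion teacher

    # Consistency term: map noisy inputs back to clean data (a stand-in
    # for the continuous-time consistency target).
    loss_consistency = (pred - x0).pow(2).mean()

    # Score regularizer: anchor the student to the teacher's prediction,
    # standing in for the paper's score-based regularization term.
    loss_score = (pred - teacher_pred).pow(2).mean()

    return loss_consistency + lambda_score * loss_score

# Toy usage with trivial "models" (purely illustrative):
toy_net = lambda x_t, t: 0.9 * x_t
print(rcm_style_loss(toy_net, toy_net, torch.randn(4, 8)))
```

The division of labor the sketch tries to convey: the consistency term drives few-step generation, while the score regularizer supplies a second training signal from a frozen teacher.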

Computer Science > Computer Vision and Pattern Recognition

arXiv:2510.08431 (cs) · Submitted on 9 Oct 2025 (v1), last revised 15 Feb 2026 (this version, v2)

Title: Large Scale Diffusion Distillation via Score-Regularized Continuous-Time Consistency

Authors: Kaiwen Zheng, Yuji Wang, Qianli Ma, Huayu Chen, Jintao Zhang, Yogesh Balaji, Jianfei Chen, Ming-Yu Liu, Jun Zhu, Qinsheng Zhang

Abstract: Although continuous-time consistency models (e.g., sCM, MeanFlow) are theoretically principled and empirically powerful for fast academic-scale diffusion, their applicability to large-scale text-to-image and video tasks remains unclear due to infrastructure challenges in Jacobian-vector product (JVP) computation and the limitations of evaluation benchmarks like FID. This work represents the first effort to scale up continuous-time consistency to general application-level image and video diffusion models, and to make JVP-based distillation effective at large scale. We first develop a parallelism-compatible FlashAttention-2 JVP kernel, enabling sCM training on models with over 10 billion parameters and high-dimensional video tasks. Our investigation reveals fundamental quality limitations of sCM in fine-detail generation, which we attribute to error accumulation and the "mode-covering" nature of its forward-divergence objective. […]
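
The abstract singles out Jacobian-vector product (JVP) computation as the key infrastructure hurdle: continuous-time consistency training needs the total time derivative of the network output along the noising trajectory, and forward-mode autodiff delivers exactly that as a JVP. The minimal PyTorch sketch below shows the quantity in question; the toy MLP, the concatenation-based time conditioning, and the linear noising schedule are illustrative assumptions, and at the paper's scale this derivative is what the authors' parallelism-compatible FlashAttention-2 JVP kernel computes through attention layers.

```python
# Hedged sketch: the time derivative that continuous-time consistency
# (sCM-style) training needs, computed via forward-mode AD (a JVP).
# The tiny MLP and linear schedule are illustrative assumptions.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(9, 32), nn.SiLU(), nn.Linear(32, 8))

def f(x_t, t):
    # Condition on t by simple concatenation (an illustrative choice).
    return net(torch.cat([x_t, t.unsqueeze(-1)], dim=-1))

x0 = torch.randn(4, 8)  # toy "clean" batch
noise = torch.randn_like(x0)
t = torch.rand(4)

# Linear (rectified-flow style) trajectory and its time derivative.
x_t = (1 - t.unsqueeze(-1)) * x0 + t.unsqueeze(-1) * noise
dxt_dt = noise - x0

# jvp returns (f(x_t, t), d/dt f(x_t(t), t)) along the tangent
# (dxt_dt, 1): the total time derivative of the network output.
out, df_dt = torch.func.jvp(f, (x_t, t), (dxt_dt, torch.ones_like(t)))
print(out.shape, df_dt.shape)  # torch.Size([4, 8]) torch.Size([4, 8])
```

In an sCM-style loss this df_dt would feed the continuous-time consistency target; here it only demonstrates the derivative that a large-scale JVP kernel must compute efficiently.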

Related Articles

LLMs

My AI spent last night modifying its own codebase

I've been working on a local AI system called Apis that runs completely offline through Ollama. During a background run, Apis identified ...

Reddit - Artificial Intelligence · 1 min
LLMs

Fake users generated by AI can't simulate humans — review of 182 research papers. Your thoughts?

https://www.researchsquare.com/article/rs-9057643/v1 There’s a massive trend right now where tech companies, businesses, even researchers...

Reddit - Artificial Intelligence · 1 min
AI Infrastructure

UMKC Announces New Master of Science in Artificial Intelligence

UMKC announces a new Master of Science in Artificial Intelligence program aimed at addressing workforce demand for AI expertise, set to l...

AI News - General · 4 min
Machine Learning

Accelerating science with AI and simulations

MIT Professor Rafael Gómez-Bombarelli discusses the transformative potential of AI in scientific research, emphasizing its role in materi...

AI News - General · 10 min