[2602.24208] SenCache: Accelerating Diffusion Model Inference via Sensitivity-Aware Caching


arXiv - Machine Learning

About this article


Computer Science > Computer Vision and Pattern Recognition
arXiv:2602.24208 (cs) (Submitted on 27 Feb 2026)
Title: SenCache: Accelerating Diffusion Model Inference via Sensitivity-Aware Caching
Authors: Yasaman Haghighi, Alexandre Alahi

Abstract: Diffusion models achieve state-of-the-art video generation quality, but their inference remains expensive due to the large number of sequential denoising steps. This has motivated a growing line of research on accelerating diffusion inference. Among training-free acceleration methods, caching reduces computation by reusing previously computed model outputs across timesteps. Existing caching methods rely on heuristic criteria to choose cache/reuse timesteps and require extensive tuning. We address this limitation with a principled sensitivity-aware caching framework. Specifically, we formalize the caching error through an analysis of the model output sensitivity to perturbations in the denoising inputs, i.e., the noisy latent and the timestep, and show that this sensitivity is a key predictor of caching error. Based on this analysis, we propose Sensitivity-Aware Caching (SenCache), a dynamic caching policy that adaptively selects caching timesteps on a per-sample basis. Our framework provides a theoretical basis for adaptive caching, explains why prior empir...
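To make the cache/reuse idea concrete, here is a minimal toy sketch of a sensitivity-aware caching loop. This is not the paper's actual SenCache policy: the denoiser is a stand-in function, the sensitivity is a simple finite-difference estimate with respect to the timestep, and the threshold `tau`, the update rule, and all names are illustrative assumptions. The point is only the mechanism the abstract describes: predict the caching error from an output sensitivity, reuse the cached output when the predicted error is small, and recompute otherwise.

```python
import numpy as np

def model(x, t):
    # Toy stand-in for an expensive denoiser; output varies smoothly with t.
    return np.cos(t) * x

def denoise_with_cache(x, timesteps, tau=0.05):
    """Toy denoising loop with a sensitivity-aware cache (illustrative only).

    Reuses the cached model output whenever the first-order predicted
    caching error, sensitivity * |t - cached_t|, stays below tau;
    otherwise it recomputes and refreshes the sensitivity estimate.
    """
    cached_out, cached_t, sens = None, None, None
    evals = 0  # number of actual model calls (the quantity caching saves)
    for t in timesteps:
        if sens is not None and sens * abs(t - cached_t) < tau:
            out = cached_out  # predicted error is small: reuse, skip the call
        else:
            out = model(x, t)
            evals += 1
            if cached_out is not None:
                # Finite-difference proxy for output sensitivity to the timestep.
                sens = np.linalg.norm(out - cached_out) / (abs(t - cached_t) + 1e-8)
            cached_out, cached_t = out, t
        x = x - 0.1 * out  # toy update step, not a real sampler
    return x, evals

x_final, evals = denoise_with_cache(np.ones(4), np.linspace(1.0, 0.0, 50))
print(f"model calls: {evals} of 50 steps")
```

Because the predicted error grows with the gap |t - cached_t|, reuse runs naturally end after a few steps and the model is re-evaluated, which is the per-sample adaptivity the abstract attributes to the method (here in caricature).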

Originally published on March 02, 2026. Curated by AI News.


