[2602.24208] SenCache: Accelerating Diffusion Model Inference via Sensitivity-Aware Caching
Computer Science > Computer Vision and Pattern Recognition
arXiv:2602.24208 (cs)
[Submitted on 27 Feb 2026]

Title: SenCache: Accelerating Diffusion Model Inference via Sensitivity-Aware Caching
Authors: Yasaman Haghighi, Alexandre Alahi

Abstract: Diffusion models achieve state-of-the-art video generation quality, but their inference remains expensive due to the large number of sequential denoising steps. This has motivated a growing line of research on accelerating diffusion inference. Among training-free acceleration methods, caching reduces computation by reusing previously computed model outputs across timesteps. Existing caching methods rely on heuristic criteria to choose cache/reuse timesteps and require extensive tuning. We address this limitation with a principled sensitivity-aware caching framework. Specifically, we formalize the caching error through an analysis of the model output sensitivity to perturbations in the denoising inputs, i.e., the noisy latent and the timestep, and show that this sensitivity is a key predictor of caching error. Based on this analysis, we propose Sensitivity-Aware Caching (SenCache), a dynamic caching policy that adaptively selects caching timesteps on a per-sample basis. Our framework provides a theoretical basis for adaptive caching, explains why prior empir...
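The abstract's core idea, reusing a cached model output at timesteps where the denoiser is insensitive to input perturbations, can be illustrated with a toy sketch. The following is not the paper's algorithm: `toy_model`, the finite-difference `sensitivity_proxy`, the `threshold`, and the `max_reuse` cap are all hypothetical stand-ins chosen only to show the shape of a dynamic, per-sample cache/reuse loop.

```python
import math

def toy_model(x, t):
    # Stand-in for an expensive diffusion denoiser (hypothetical):
    # output varies smoothly with the latent x and timestep t.
    return [xi * math.exp(-0.05 * t) for xi in x]

def sensitivity_proxy(prev_out, curr_out, prev_in, curr_in):
    # Finite-difference estimate of output change per unit input change
    # between the last two *computed* steps. This is an assumed proxy,
    # not the sensitivity measure defined in the paper.
    d_out = math.sqrt(sum((a - b) ** 2 for a, b in zip(curr_out, prev_out)))
    d_in = math.sqrt(sum((a - b) ** 2 for a, b in zip(curr_in, prev_in))) + 1e-8
    return d_out / d_in

def cached_denoise(x0, num_steps=20, threshold=0.5, max_reuse=3):
    """Toy denoising loop that reuses the cached model output while the
    estimated sensitivity stays below `threshold` (capped at `max_reuse`
    consecutive reuses so the estimate is periodically refreshed)."""
    x = list(x0)
    cache = None               # (latent at compute time, model output)
    last_sens = float("inf")   # forces a real compute until estimated
    reused = 0
    reuse_run = 0
    for t in range(num_steps, 0, -1):
        if cache is not None and last_sens < threshold and reuse_run < max_reuse:
            out = cache[1]     # low sensitivity: reuse cached output
            reused += 1
            reuse_run += 1
        else:
            out = toy_model(x, t)   # expensive call
            if cache is not None:
                last_sens = sensitivity_proxy(cache[1], out, cache[0], x)
            cache = (list(x), out)
            reuse_run = 0
        # Simple Euler-style update of the latent toward the output.
        x = [xi + 0.1 * (oi - xi) for xi, oi in zip(x, out)]
    return x, reused
```

With `threshold=0.0` the policy degenerates to caching disabled (every step is computed), while a large threshold lets the loop skip model calls whenever the sensitivity proxy is small, which is the per-sample adaptivity the abstract describes.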