[2601.19961] MeanCache: From Instantaneous to Average Velocity for Accelerating Flow Matching Inference
Computer Science > Machine Learning
arXiv:2601.19961 (cs)
[Submitted on 27 Jan 2026 (v1), last revised 28 Feb 2026 (this version, v2)]
Title: MeanCache: From Instantaneous to Average Velocity for Accelerating Flow Matching Inference
Authors: Huanlin Gao, Ping Chen, Fuyuan Shi, Ruijia Wu, Li YanTao, Qiang Hui, Yuren You, Ting Lu, Chao Tan, Shaoan Zhao, Zhaoxiang Liu, Fang Zhao, Kai Wang, Shiguo Lian
Abstract: We present MeanCache, a training-free caching framework for efficient Flow Matching inference. Existing caching methods reduce redundant computation but typically rely on instantaneous velocity information (e.g., feature caching), which often leads to severe trajectory deviations and error accumulation at high acceleration ratios. MeanCache introduces an average-velocity perspective: by leveraging cached Jacobian--vector products (JVPs) to construct interval-average velocities from instantaneous velocities, it effectively mitigates local error accumulation. To further improve cache timing and the stability of JVP reuse, we develop a trajectory-stability scheduling strategy that determines the cache schedule via a Peak-Suppressed Shortest Path under budget constraints. Experiments on FLUX.1, Qwen-Image, and HunyuanVideo demonstrate that MeanCache achieves 4.12X and 4.56X and ...
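The core idea of building an interval-average velocity from an instantaneous one via a JVP can be illustrated with a small sketch. This is not the paper's implementation: the toy velocity field, the first-order (midpoint) correction, and all names below are illustrative assumptions; only the use of a single JVP to obtain the total time derivative of the velocity along the trajectory reflects the mechanism the abstract describes.

```python
# Hypothetical sketch: first-order interval-average velocity from one JVP.
import jax
import jax.numpy as jnp


def v(x, t):
    # Toy instantaneous velocity field, a stand-in for the flow model.
    return -x * t


def avg_velocity(x, t, dt):
    # Total derivative along the trajectory, dv/dt = dv/dt|_x + (dv/dx) v,
    # obtained with a single JVP using the tangent direction (v(x, t), 1).
    vt = v(x, t)
    _, dv_dt = jax.jvp(v, (x, t), (vt, 1.0))
    # First-order (midpoint) estimate of the average velocity over [t, t+dt];
    # in a caching scheme, dv_dt could be reused across nearby steps.
    return vt + 0.5 * dt * dv_dt


x0 = jnp.array([1.0, -2.0])
print(avg_velocity(x0, 0.5, 0.1))
```

The point of the midpoint correction is that integrating with the average velocity tracks the true trajectory to second order in the step size, whereas reusing the raw instantaneous velocity is only first-order accurate, which is one way to read the abstract's claim about reduced error accumulation.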