[2602.00834] Don't Forget Its Variance! The Minimum Path Variance Principle for Accurate and Stable Score-Based Models
Summary
This paper introduces the Minimum Path Variance (MinPV) Principle, which resolves a paradox of score-based methods in machine learning (they are path-independent in theory yet path-dependent in practice) by minimizing the path variance of the score function, improving model accuracy and stability.
Why It Matters
The MinPV Principle offers a significant advancement in optimizing score-based models, which are widely used in machine learning. By accounting for the previously overlooked path variance, this research improves the reliability and performance of these models, making it directly relevant to practitioners in the field.
Key Takeaways
- The MinPV Principle minimizes path variance in score-based models.
- A closed-form expression for path variance enables tractable optimization.
- The method adapts to data without manual selection, improving accuracy.
- Establishes new state-of-the-art results on challenging benchmarks.
- Provides a general framework for optimizing score-based interpolation.
Computer Science > Machine Learning — arXiv:2602.00834 (cs)
[Submitted on 31 Jan 2026 (v1), last revised 17 Feb 2026 (this version, v2)]
Authors: Wei Chen, Jiacheng Li, Shigui Li, Zhiqi Lin, Junmei Yang, John Paisley, Delu Zeng
Abstract: Score-based methods are powerful across machine learning, but they face a paradox: theoretically path-independent, yet practically path-dependent. We resolve this by proving that practical training objectives differ from the ideal, ground-truth objective by a crucial, overlooked term: the path variance of the score function. We propose the MinPV (Minimum Path Variance) Principle to minimize this path variance. Our key contribution is deriving a closed-form expression for the variance, making optimization tractable. By parameterizing the path with a flexible Kumaraswamy Mixture Model, our method learns data-adaptive, low-variance paths without heuristic manual selection. This principled optimization of the complete objective yields more accurate and stable estimators, establishing new state-of-the-art results on challenging benchmarks and providing a general framework for optimizing score-based interpolation.
Subjects: Machine Learning
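To make the path parameterization concrete: a Kumaraswamy CDF, F(t; a, b) = 1 - (1 - t^a)^b, is a flexible monotone map from [0, 1] onto [0, 1], and a weighted mixture of such CDFs stays monotone with fixed endpoints. The sketch below (a minimal illustration, not the authors' implementation; the function names, parameter values, and the use of the mixture as an interpolation schedule alpha(t) are assumptions for exposition) shows how such a learnable schedule could be evaluated:

```python
import numpy as np

def kumaraswamy_cdf(t, a, b):
    """CDF of a Kumaraswamy(a, b) distribution on [0, 1]:
    F(t) = 1 - (1 - t**a)**b."""
    return 1.0 - (1.0 - t**a) ** b

def mixture_schedule(t, params, weights):
    """Illustrative interpolation schedule alpha(t) built as a convex
    mixture of Kumaraswamy CDFs. Monotone by construction, with
    alpha(0) = 0 and alpha(1) = 1 regardless of the parameters."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize to a convex combination
    return sum(w * kumaraswamy_cdf(t, a, b)
               for w, (a, b) in zip(weights, params))

# Hypothetical parameters (in the paper these would be optimized
# to minimize the closed-form path variance).
t = np.linspace(0.0, 1.0, 101)
alpha = mixture_schedule(t, params=[(2.0, 3.0), (0.5, 0.5)], weights=[0.7, 0.3])
```

Because each component CDF is nondecreasing and the weights are normalized to be nonnegative, any parameter setting yields a valid schedule, which is what makes this family convenient for gradient-based optimization over paths.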