[2602.13759] Discrete Double-Bracket Flows for Isotropic-Noise Invariant Eigendecomposition
Summary
This paper presents a matrix-free approach to eigendecomposition based on discrete double-bracket flows. Because the flow's generator is invariant to isotropic shifts of the covariance, the iteration's trajectory and its stable step size depend only on the trace-free part of the operator, improving stability and convergence rates in noisy machine learning settings.
Why It Matters
Eigendecomposition from noisy matrix-vector products is a core primitive in machine learning, data science, and optimization. Existing stochastic methods either tie their stable step size to the full, noise-inflated operator norm or slow down as adaptive steps vanish. By introducing a method whose behavior is provably unaffected by the isotropic noise level, with global convergence guarantees, this work has potential implications for the efficiency and reliability of such algorithms.
Key Takeaways
- Introduces discrete double-bracket flows for eigendecomposition.
- Achieves pathwise invariance to isotropic noise, enhancing stability.
- Establishes global convergence through strict-saddle geometry.
- Demonstrates an accelerated O(log(1/ζ)) saddle-escape rate via an explicit characterization of degenerate blocks.
- Provides a high-probability finite-time convergence guarantee.
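To see why isotropic-noise invariance is a natural target, note that adding $\sigma^2 I$ to a covariance shifts every eigenvalue by $\sigma^2$ but leaves the eigenvectors, the objects eigendecomposition recovers, unchanged. A quick NumPy check (generic linear algebra, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6
a = rng.standard_normal((d, d))
c = a @ a.T  # symmetric positive semidefinite "covariance"

# An isotropic shift sigma^2 * I moves every eigenvalue by sigma^2
# but leaves the eigenvectors untouched.
sigma2 = 5.0
w1, v1 = np.linalg.eigh(c)
w2, v2 = np.linalg.eigh(c + sigma2 * np.eye(d))

print(np.allclose(w2, w1 + sigma2))        # eigenvalues shift uniformly
print(np.allclose(np.abs(v1), np.abs(v2))) # eigenvectors match (up to sign)
```

An estimator that is pathwise invariant to this shift therefore loses nothing about the quantities of interest, while its stability no longer degrades as the noise floor $\sigma_k^2$ grows.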
Computer Science > Machine Learning
arXiv:2602.13759 (cs) [Submitted on 14 Feb 2026]
Title: Discrete Double-Bracket Flows for Isotropic-Noise Invariant Eigendecomposition
Authors: ZhiMing Li, JiaHe Feng
Abstract: We study matrix-free eigendecomposition under a matrix-vector product (MVP) oracle, where each step observes a covariance operator $C_k = C_{sig} + \sigma_k^2 I + E_k$. Standard stochastic approximation methods either use fixed steps that couple stability to $\|C_k\|_2$, or adapt steps in ways that slow convergence through vanishing updates. We introduce a discrete double-bracket flow whose generator is invariant to isotropic shifts, yielding pathwise invariance to $\sigma_k^2 I$ at the discrete-time level. The resulting trajectory and a maximal stable step size $\eta_{max} \propto 1/\|C_e\|_2^2$ depend only on the trace-free covariance $C_e$. We establish global convergence via strict-saddle geometry for the diagonalization objective and an input-to-state stability analysis, with sample complexity scaling as $O(\|C_e\|_2^2 / (\Delta^2 \epsilon))$ under trace-free perturbations. An explicit characterization of degenerate blocks yields an accelerated $O(\log(1/\zeta))$ saddle-escape rate and a high-probability finite-time convergence guarantee.
Subjects: Machine Learning (cs.LG); Numerical Analysis
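The pathwise-invariance claim can be illustrated with a Brockett-style discrete double-bracket update, $H \leftarrow H + \eta\,[H,[H,N]]$ with a fixed diagonal $N$. This is a minimal sketch under assumed details (the update rule, the reference matrix `n`, and the step constant are illustrative choices, not necessarily the paper's exact scheme); the key point is that the commutator satisfies $[C + \sigma^2 I, X] = [C, X]$, so the clean and noise-shifted trajectories differ exactly by the constant shift:

```python
import numpy as np

rng = np.random.default_rng(0)

def bracket(a, b):
    # Matrix commutator [A, B] = AB - BA. Note [A + s*I, B] = [A, B]:
    # this is the source of the isotropic-shift invariance.
    return a @ b - b @ a

def double_bracket_step(h, n, eta):
    # One Brockett-style discrete double-bracket update,
    #   H <- H + eta * [H, [H, N]],
    # which increases tr(HN) and drives H toward diagonal form
    # (illustrative; the paper's exact discretization may differ).
    return h + eta * bracket(h, bracket(h, n))

d = 5
a = rng.standard_normal((d, d))
c = a @ a.T                          # symmetric "signal" covariance
n = np.diag(np.arange(d, 0, -1.0))   # fixed diagonal reference matrix
sigma2 = 3.7                         # isotropic noise level

# Step size scaled by the trace-free part, mirroring the abstract's
# eta_max ∝ 1 / ||C_e||_2^2 (the constant 0.1 is a conservative choice).
c_e = c - (np.trace(c) / d) * np.eye(d)
eta = 0.1 / np.linalg.norm(c_e, 2) ** 2

h1, h2 = c.copy(), c + sigma2 * np.eye(d)
for _ in range(100):
    h1 = double_bracket_step(h1, n, eta)
    h2 = double_bracket_step(h2, n, eta)

# Pathwise invariance: the noisy trajectory is the clean one plus sigma2*I.
print(np.allclose(h2 - sigma2 * np.eye(d), h1))   # True
# Progress toward diagonalization: tr(HN) increases along the flow.
print(np.trace(h1 @ n) > np.trace(c @ n))         # True
```

Because the update direction is identical for $C$ and $C + \sigma^2 I$ at every step, the step-size stability threshold also depends only on the trace-free covariance, which is the discrete-time invariance the abstract emphasizes.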