[2501.01696] Guaranteed Nonconvex Low-Rank Tensor Estimation via Scaled Gradient Descent
Summary
This paper presents a Scaled Gradient Descent (ScaledGD) algorithm for low-rank tensor estimation that converges linearly at a rate independent of the ground truth's condition number, while remaining robust to corruptions such as missing entries and sparse noise.
Why It Matters
As machine learning increasingly relies on multi-dimensional data, effective tensor estimation is crucial for applications in signal processing and data analysis. This research provides a robust solution to a common challenge in the field, enhancing the reliability of tensor-based methods.
Key Takeaways
- ScaledGD achieves linear convergence rates independent of the tensor's condition number.
- The algorithm is tailored for applications like tensor completion and regression, enhancing computational efficiency.
- Numerical examples validate the algorithm's effectiveness in real-world scenarios.
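To make the "scaled" in ScaledGD concrete, here is a minimal NumPy sketch of the idea in the simpler matrix-factorization case: each gradient step on a factor is right-preconditioned by the inverse Gram matrix of the other factor, which is what removes the dependence on the condition number. This is an illustrative sketch of the general preconditioning technique, not the paper's t-SVD-based algorithm; the problem sizes, step size, and iteration count below are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 50, 3

# Ill-conditioned rank-3 ground truth (condition number 100),
# where vanilla gradient descent would converge slowly.
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
Y = U @ np.diag([100.0, 10.0, 1.0]) @ V.T

# Spectral initialization via truncated SVD, slightly perturbed so the
# iterations below have visible work to do (otherwise the noiseless,
# fully observed case is solved exactly at initialization).
Uy, sy, Vty = np.linalg.svd(Y, full_matrices=False)
L = Uy[:, :r] * np.sqrt(sy[:r]) + 0.1 * rng.standard_normal((n, r))
R = Vty[:r].T * np.sqrt(sy[:r]) + 0.1 * rng.standard_normal((n, r))

eta = 0.5  # ScaledGD tolerates a constant step size here
for _ in range(200):
    E = L @ R.T - Y  # residual of the current factorization
    # Scaled (preconditioned) gradient steps: the inverse Gram matrices
    # (R^T R)^{-1} and (L^T L)^{-1} equalize the curvature across
    # directions, so the contraction rate no longer depends on kappa.
    L_new = L - eta * E @ R @ np.linalg.inv(R.T @ R)
    R = R - eta * E.T @ L @ np.linalg.inv(L.T @ L)
    L = L_new

err = np.linalg.norm(L @ R.T - Y) / np.linalg.norm(Y)
print(err)
```

Dropping the two `np.linalg.inv(...)` factors recovers plain factored gradient descent, whose rate degrades with the condition number; the preconditioners cost only an r-by-r inverse per step, preserving the low per-iteration cost noted above.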
Statistics > Machine Learning
arXiv:2501.01696 (stat.ML)
[Submitted on 3 Jan 2025 (v1), last revised 14 Feb 2026 (this version, v2)]
Title: Guaranteed Nonconvex Low-Rank Tensor Estimation via Scaled Gradient Descent
Authors: Tong Wu
Abstract
Tensors, which give a faithful and effective representation to deliver the intrinsic structure of multi-dimensional data, play a crucial role in an increasing number of signal processing and machine learning problems. However, tensor data are often accompanied by arbitrary signal corruptions, including missing entries and sparse noise. A fundamental challenge is to reliably extract the meaningful information from corrupted tensor data in a statistically and computationally efficient manner. This paper develops a scaled gradient descent (ScaledGD) algorithm to directly estimate the tensor factors with tailored spectral initializations under the tensor-tensor product (t-product) and tensor singular value decomposition (t-SVD) framework. With tailored variants for tensor robust principal component analysis, (robust) tensor completion and tensor regression, we theoretically show that ScaledGD achieves linear convergence at a constant rate that is independent of the condition number of the ground truth low-rank tensor, while maintaining the low per-iteration cost of gradient descent. To the best of our knowledge, Scaled...
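The t-product underlying the t-SVD framework in the abstract can be computed facewise in the Fourier domain: take the FFT of both tensors along the third mode, multiply the frontal slices frequency by frequency, and invert the FFT. A small sketch, assuming the standard Kilmer-Martin definition of the t-product (the function name and test tensors are illustrative):

```python
import numpy as np

def t_product(A, B):
    """t-product of an n1 x n2 x n3 tensor A with an n2 x n4 x n3 tensor B.

    Equivalent to multiplying the block-circulant unfolding of A by the
    unfolded B, but computed efficiently: FFT along the third mode,
    facewise matrix products, then the inverse FFT.
    """
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    # Facewise product: for each frequency k, Ch[:, :, k] = Ah[:, :, k] @ Bh[:, :, k]
    Ch = np.einsum('ijk,jlk->ilk', Ah, Bh)
    return np.real(np.fft.ifft(Ch, axis=2))

# Sanity check: the identity tensor under the t-product has an identity
# matrix as its first frontal slice and zeros elsewhere.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3, 5))
I = np.zeros((3, 3, 5))
I[:, :, 0] = np.eye(3)
assert np.allclose(t_product(A, I), A)
```

The t-SVD then factors each frequency-domain frontal slice with an ordinary matrix SVD, which is why algorithms in this framework, including ScaledGD's factor updates, inherit the per-slice cost of matrix operations.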