[2603.04703] Implicit Bias and Loss of Plasticity in Matrix Completion: Depth Promotes Low-Rankness
Computer Science > Machine Learning

arXiv:2603.04703 (cs)

[Submitted on 5 Mar 2026]

Title: Implicit Bias and Loss of Plasticity in Matrix Completion: Depth Promotes Low-Rankness
Authors: Baekrok Shin, Chulhee Yun

Abstract: We study matrix completion via deep matrix factorization (a.k.a. deep linear neural networks) as a simplified testbed to examine how network depth influences training dynamics. Despite the simplicity and importance of the problem, prior theory largely focuses on shallow (depth-2) models and does not fully explain the implicit low-rank bias observed in deeper networks. We identify coupled dynamics as a key mechanism behind this bias and show that it intensifies with increasing depth. Focusing on gradient flow under block-diagonal observations, we prove: (a) networks of depth $\geq 3$ exhibit coupling unless initialized diagonally, and (b) convergence to rank-1 occurs if and only if the dynamics is coupled, resolving an open question by Menon (2024) for a family of initializations. We also revisit the loss of plasticity phenomenon in matrix completion (Kleinman et al., 2024), where pre-training on few observations and resuming with more degrades performance. We show that deep models avoid plasticity loss due to their low-rank bias, whereas depth-2 networks pre-trained under decoupled dynamics fail...
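To make the setting concrete, below is a minimal NumPy sketch of deep matrix factorization for matrix completion: gradient descent on the factors of a product $W_d \cdots W_1$, fit to the observed entries of a rank-1 target, with the singular value profile of the learned product printed per depth. Everything here is illustrative rather than the paper's setup: the paper analyzes gradient flow under block-diagonal observation patterns, while this sketch uses plain gradient descent, a random Bernoulli mask, and made-up hyperparameters (`lr`, `steps`, `scale`) and helper names (`train`, `product`).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10

# Rank-1 ground truth with unit top singular value, observed through a
# random Bernoulli mask. NOTE: this random mask stands in for the paper's
# block-diagonal observation pattern purely for illustration.
u, v = rng.standard_normal(n), rng.standard_normal(n)
M = np.outer(u / np.linalg.norm(u), v / np.linalg.norm(v))
mask_full = (rng.random((n, n)) < 0.5).astype(float)

def train(depth, mask, Ws=None, lr=0.2, steps=20000, scale=0.1):
    """Gradient descent on 0.5 * ||mask * (W_d ... W_1 - M)||_F^2."""
    if Ws is None:
        # small, non-diagonal random initialization
        Ws = [scale * rng.standard_normal((n, n)) for _ in range(depth)]
    else:
        Ws = [W.copy() for W in Ws]  # warm start: continue from given factors
    for _ in range(steps):
        prods = [Ws[0]]                 # prods[l] = Ws[l] @ ... @ Ws[0]
        for W in Ws[1:]:
            prods.append(W @ prods[-1])
        R = mask * (prods[-1] - M)      # residual on observed entries only
        grads = []
        for l in range(depth):
            left = np.eye(n)            # product of all factors above Ws[l]
            for W in Ws[l + 1:]:
                left = W @ left
            right = prods[l - 1] if l > 0 else np.eye(n)
            grads.append(left.T @ R @ right.T)  # d(loss)/d(Ws[l])
        for W, g in zip(Ws, grads):
            W -= lr * g
    return Ws

def product(Ws):
    P = Ws[0]
    for W in Ws[1:]:
        P = W @ P
    return P

for depth in (2, 3, 4):
    s = np.linalg.svd(product(train(depth, mask_full)), compute_uv=False)
    print(f"depth {depth}: top singular values {np.round(s[:4], 3)}")
```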
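And, reusing `train` and `product` from the sketch above, a rough version of the warm-start experiment the abstract describes (pre-train on few observations, then resume with more). The subset density and depths are invented for illustration and do not reproduce the protocol of Kleinman et al. (2024):

```python
# Pre-train on a sparser subset of the observed entries, then resume
# training on the full mask; compare against a cold start from scratch.
mask_small = mask_full * (rng.random((n, n)) < 0.3)

for depth in (2, 3):
    warm = train(depth, mask_full, Ws=train(depth, mask_small))
    cold = train(depth, mask_full)
    for name, Ws in (("warm", warm), ("cold", cold)):
        err = np.linalg.norm((1 - mask_full) * (product(Ws) - M))
        print(f"depth {depth}, {name} start: error on unobserved entries {err:.3f}")
```

On the abstract's account, the warm-started depth-2 run can end up worse than the cold start when pre-training proceeds under decoupled dynamics, while deeper factorizations are protected by their low-rank bias.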