[2604.05414] Training Without Orthogonalization, Inference With SVD: A Gradient Analysis of Rotation Representations
Computer Science > Machine Learning
arXiv:2604.05414 (cs)
[Submitted on 7 Apr 2026]
Title: Training Without Orthogonalization, Inference With SVD: A Gradient Analysis of Rotation Representations
Authors: Chris Choy
Abstract: Recent work has shown that removing orthogonalization during training and applying it only at inference improves rotation estimation in deep learning, with empirical evidence favoring 9D representations with SVD projection. However, the theoretical understanding of why SVD orthogonalization specifically harms training, and why it should be preferred over Gram-Schmidt at inference, remains incomplete. We provide a detailed gradient analysis of SVD orthogonalization specialized to $3 \times 3$ matrices and $SO(3)$ projection. Our central result derives the exact spectrum of the SVD backward pass Jacobian: it has rank $3$ (matching the dimension of $SO(3)$) with nonzero singular values $2/(s_i + s_j)$ and condition number $\kappa = (s_1 + s_2)/(s_2 + s_3)$, creating quantifiable gradient distortion that is most severe when the predicted matrix is far from $SO(3)$ (e.g., early in training when $s_3 \approx 0$). We further show that even stabilized SVD gradients introduce gradient direction error, whereas removing SVD from the training loop avoids this tradeoff entirely. We also prove...
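The central spectrum claim admits a simple numerical sanity check. The sketch below is not from the paper; the PyTorch implementation and the helper name svd_orthogonalize are assumptions for illustration. It projects a random $3 \times 3$ matrix onto $SO(3)$ via SVD, computes the full $9 \times 9$ Jacobian of that projection with autograd, and compares its nonzero singular values against $2/(s_i + s_j)$ and its condition number against $(s_1 + s_2)/(s_2 + s_3)$.

import torch

def svd_orthogonalize(M):
    # SVD projection onto SO(3): R = U diag(1, 1, det(U V^T)) V^T.
    U, _, Vh = torch.linalg.svd(M)
    det = torch.det(U @ Vh)
    d = torch.cat([torch.ones(2, dtype=M.dtype), det.unsqueeze(0)])
    return U @ torch.diag(d) @ Vh

torch.manual_seed(0)
M = torch.randn(3, 3, dtype=torch.float64)
if torch.det(M) < 0:
    M = -M  # keep det(M) > 0 so the sign correction is the identity, matching the s_i >= 0 setting

# Full 9x9 Jacobian of the flattened projection with respect to the flattened input.
J = torch.autograd.functional.jacobian(svd_orthogonalize, M).reshape(9, 9)
jac_svals = torch.linalg.svdvals(J)

s = torch.linalg.svdvals(M)  # singular values of M, sorted s_1 >= s_2 >= s_3
predicted = torch.stack([2 / (s[1] + s[2]), 2 / (s[0] + s[2]), 2 / (s[0] + s[1])])

print("Jacobian singular values:", jac_svals)   # three nonzero values, six numerically zero
print("Predicted 2/(s_i + s_j):", predicted)
print("Condition number (s_1+s_2)/(s_2+s_3):", (s[0] + s[1]) / (s[1] + s[2]))

In double precision, the three nonzero Jacobian singular values should match the predicted $2/(s_i + s_j)$ values closely and the remaining six should be numerically zero, consistent with the rank-$3$ statement; the backward-pass Jacobian is the transpose of this forward Jacobian and therefore shares the same spectrum and condition number.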