[2602.12384] Why Deep Jacobian Spectra Separate: Depth-Induced Scaling and Singular-Vector Alignment

arXiv - AI · 4 min read

Summary

This paper explores the mechanisms behind the implicit bias of gradient-based training in deep networks, focusing on depth-induced scaling of singular values and singular-vector alignment in deep Jacobians.

Why It Matters

Understanding the dynamics of deep Jacobians is crucial for improving deep learning models. This research provides insight into how depth shapes model behavior, which can inform better training techniques and architectures.

Key Takeaways

  • Depth-induced scaling of singular values affects training dynamics.
  • Strong spectral separation leads to singular-vector alignment in Jacobians.
  • The study proposes a new approximation regime for understanding singular-value dynamics.

Computer Science > Machine Learning

arXiv:2602.12384 (cs) · Submitted on 12 Feb 2026

Title: Why Deep Jacobian Spectra Separate: Depth-Induced Scaling and Singular-Vector Alignment

Authors: Nathanaël Haas, François Gatine, Augustin M Cosse, Zied Bouraoui

Abstract: Understanding why gradient-based training in deep networks exhibits strong implicit bias remains challenging, in part because tractable singular-value dynamics are typically available only for balanced deep linear models. We propose an alternative route based on two theoretically grounded and empirically testable signatures of deep Jacobians: depth-induced exponential scaling of ordered singular values and strong spectral separation. Adopting a fixed-gates view of piecewise-linear networks, where Jacobians reduce to products of masked linear maps within a single activation region, we prove the existence of Lyapunov exponents governing the top singular values at initialization, give closed-form expressions in a tractable masked model, and quantify finite-depth corrections. We further show that sufficiently strong separation forces singular-vector alignment in matrix products, yielding an approximately shared singular basis for intermediate Jacobians. Together, these results motivate an approximation regime in which singular-value d...
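The fixed-gates picture in the abstract can be probed numerically: multiply random masked Gaussian layers and track the normalized log singular values ln σ_k / L of the product. The per-layer growth rates settle toward fixed values (finite-depth estimates of the Lyapunov exponents the paper proves exist), while the gap between ordered singular values widens with depth. This is a hedged toy sketch, not the authors' model; the width `n`, gate probability `p`, and He-style scaling below are illustrative assumptions.

```python
import numpy as np

# Toy illustration of the fixed-gates view: within one activation region,
# a piecewise-linear network's Jacobian is a product of masked linear maps
# J = D_L W_L ... D_1 W_1. Width, gate probability, and scaling are
# illustrative assumptions, not taken from the paper.
rng = np.random.default_rng(0)
n, p = 64, 0.5  # width and keep-probability of the gates (assumed values)

def masked_jacobian(depth):
    """Product of `depth` masked Gaussian layers."""
    J = np.eye(n)
    for _ in range(depth):
        W = rng.normal(scale=np.sqrt(2.0 / n), size=(n, n))  # random layer
        D = np.diag((rng.random(n) < p).astype(float))       # fixed gates
        J = D @ W @ J
    return J

lyap_by_depth, sep_by_depth = {}, {}
for L in (8, 32, 128):
    s = np.linalg.svd(masked_jacobian(L), compute_uv=False)
    # Per-layer log growth rates of the top 4 singular values: finite-depth
    # estimates of the Lyapunov exponents.
    lyap_by_depth[L] = np.log(s[:4]) / L
    # Spectral separation between the 1st and 4th ordered singular values.
    sep_by_depth[L] = s[0] / s[3]
    print(f"depth {L:4d}  top-4 exponents {np.round(lyap_by_depth[L], 3)}  "
          f"separation s1/s4 = {sep_by_depth[L]:.1f}")
```

Running this shows the exponent estimates stabilizing as depth grows and the separation ratio s1/s4 increasing with depth, consistent with the depth-induced scaling and spectral-separation signatures described above.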


