[2602.13910] Sufficient Conditions for Stability of Minimum-Norm Interpolating Deep ReLU Networks

arXiv - AI · 4 min read · Article

Summary

This paper explores the stability of minimum-norm interpolating deep ReLU networks, identifying conditions under which these networks maintain stability despite perturbations in training data.

Why It Matters

Understanding the stability of deep learning models is crucial for ensuring their reliability and generalization capabilities. This research provides insights into how network architecture influences stability, which can guide future model design and training practices in machine learning.

Key Takeaways

  • Minimum-norm interpolating deep ReLU networks can be algorithmically stable under specific architectural conditions.
  • Stability holds when the network contains a (possibly small) stable sub-network followed by a layer with a low-rank weight matrix.
  • A stable sub-network alone is not enough: without the low-rank condition, stability of the full network is not guaranteed.
  • The results extend the classical algorithmic-stability framework, which has so far had limited success with deep networks.
  • The findings could inform future analyses and design of overparameterized models.

Computer Science > Machine Learning
arXiv:2602.13910 (cs) · Submitted on 14 Feb 2026

Title: Sufficient Conditions for Stability of Minimum-Norm Interpolating Deep ReLU Networks
Authors: Ouns El Harzli, Yoonsoo Nam, Ilja Kuzborskij, Bernardo Cuenca Grau, Ard A. Louis

Abstract: Algorithmic stability is a classical framework for analyzing the generalization error of learning algorithms. It predicts that an algorithm has small generalization error if it is insensitive to small perturbations in the training set, such as the removal or replacement of a training point. While stability has been demonstrated for numerous well-known algorithms, this framework has had limited success in analyses of deep neural networks. In this paper we study the algorithmic stability of deep ReLU homogeneous neural networks that achieve zero training error using parameters with the smallest $L_2$ norm, also known as the minimum-norm interpolation, a phenomenon that can be observed in overparameterized models trained by gradient-based algorithms. We investigate sufficient conditions for such networks to be stable. We find that 1) such networks are stable when they contain a (possibly small) stable sub-network, followed by a layer with a low-rank weight matrix, and 2) such networks are not guaranteed to be stable even when they contain a stable...
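The two notions at the heart of the abstract, minimum-norm interpolation and leave-one-out stability, can be illustrated in a much simpler linear setting (this is only a toy analogue for intuition, not the paper's deep ReLU construction): for an underdetermined system $Xw = y$ with more features than samples, the minimum-$L_2$-norm interpolator is $w = X^\top (X X^\top)^{-1} y$, and stability asks how much predictions change when one training point is removed.

```python
# Toy linear analogue of minimum-norm interpolation (illustration only;
# the paper studies deep ReLU networks, not this closed-form linear case).

def gram(X):
    """Gram matrix G = X X^T for a list-of-lists matrix X."""
    n, d = len(X), len(X[0])
    return [[sum(X[i][k] * X[j][k] for k in range(d)) for j in range(n)]
            for i in range(n)]

def inv2(G):
    """Inverse of a 2x2 matrix (enough for our n=2 example)."""
    (a, b), (c, d) = G
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def min_norm_interpolator(X, y):
    """w = X^T (X X^T)^{-1} y, the smallest-L2-norm solution of X w = y."""
    Ginv = inv2(gram(X))
    alpha = [sum(Ginv[i][j] * y[j] for j in range(2)) for i in range(2)]
    d = len(X[0])
    return [sum(alpha[i] * X[i][k] for i in range(2)) for k in range(d)]

# Two training points in R^4 (overparameterized: d=4 features > n=2 samples).
X = [[1.0, 0.0, 1.0, 0.0],
     [0.0, 1.0, 0.0, 1.0]]
y = [1.0, -1.0]

w = min_norm_interpolator(X, y)

# Zero training error: the interpolator fits both points exactly.
preds = [sum(w[k] * X[i][k] for k in range(4)) for i in range(2)]
print(preds)  # -> [1.0, -1.0]

# Leave-one-out "stability" probe: refit with the second point removed.
# With a single sample the min-norm solution is w' = x0 * y0 / ||x0||^2.
x0, y0 = X[0], y[0]
w_loo = [y0 * v / sum(u * u for u in x0) for v in x0]

# Sensitivity = change in prediction at a test point after the removal.
x_test = [1.0, 1.0, 1.0, 1.0]
f_full = sum(w[k] * x_test[k] for k in range(4))
f_loo = sum(w_loo[k] * x_test[k] for k in range(4))
print(abs(f_full - f_loo))  # -> 1.0
```

In this linear case the leave-one-out sensitivity can be computed exactly; the paper's contribution is identifying architectural conditions (a stable sub-network followed by a low-rank layer) under which an analogous insensitivity holds for deep ReLU interpolators, where no such closed form exists.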

Related Articles

LLMs

[R] The Lyra Technique — A framework for interpreting internal cognitive states in LLMs (Zenodo, open access)

We're releasing a paper on a new framework for reading and interpreting the internal cognitive states of large language models: "The Lyra...

Reddit - Machine Learning · 1 min ·
Machine Learning

[P] citracer: a small CLI tool to trace where a concept comes from in a citation graph

Hi all, I made a small tool that I've been using for my own literature reviews and figured I'd share in case it's useful to anyone else. ...

Reddit - Machine Learning · 1 min ·
LLMs

Looking to build a production-level AI/ML project (agentic systems), need guidance on what to build

Hi everyone, I’m a final-year undergraduate AI/ML student currently focusing on applied AI / agentic systems. So far, I’ve spent time und...

Reddit - ML Jobs · 1 min ·
Machine Learning

Meta is reentering the AI race with a new model called Muse Spark | The Verge

Meta Superintelligence Labs has unveiled a new AI model called Muse Spark that will soon roll out across apps like Instagram and Facebook.

The Verge - AI · 5 min ·