[2403.04545] Branch Scaling Manifests as Implicit Architectural Regularization for Improving Generalization in Overparameterized ResNets


arXiv - Machine Learning


Computer Science > Machine Learning
arXiv:2403.04545 (cs)
[Submitted on 7 Mar 2024 (v1), last revised 26 Mar 2026 (this version, v2)]

Title: Branch Scaling Manifests as Implicit Architectural Regularization for Improving Generalization in Overparameterized ResNets
Authors: Zixiong Yu, Guhan Chen, Jianfa Lai, Bohan Li, Songtao Tian

Abstract: Scaling factors in residual branches have emerged as a prevalent method for boosting neural network performance, especially in normalization-free architectures. While prior work has primarily examined scaling effects from an optimization perspective, this paper investigates their role in residual architectures through the lens of generalization theory. Specifically, we establish that wide residual networks (ResNets) with constant scaling factors become asymptotically unlearnable as depth increases. In contrast, when the scaling factor exhibits rapid depth-wise decay combined with early stopping, over-parameterized ResNets achieve minimax-optimal generalization rates. To establish this, we demonstrate that the generalization capability of wide ResNets can be approximated by the kernel regression associated with a specific kernel. Our theoretical findings are validated through experiments on synthetic data and real-world class...
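The residual-branch scaling the abstract describes can be illustrated with a toy forward pass. This is a minimal sketch, not the authors' construction: each layer computes h_{l+1} = h_l + alpha_l * act(W_l h_l), and the schedule alpha_l = 1/L is one hypothetical example of depth-wise decay (the paper's precise decay rate is not reproduced here).

```python
import numpy as np

def scaled_resnet_forward(x, weights, alphas, act=np.tanh):
    """Toy residual forward pass: h_{l+1} = h_l + alpha_l * act(W_l @ h_l).

    alphas plays the role of the residual-branch scaling factors
    discussed in the abstract (illustrative only)."""
    h = x
    for W, a in zip(weights, alphas):
        h = h + a * act(W @ h)
    return h

rng = np.random.default_rng(0)
L, d = 8, 4  # depth and width of the toy network
weights = [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(L)]
x = rng.standard_normal(d)

# Constant scaling vs. a hypothetical depth-wise decaying schedule.
out_const = scaled_resnet_forward(x, weights, alphas=[1.0] * L)
out_decay = scaled_resnet_forward(x, weights, alphas=[1.0 / L] * L)
```

With decaying scaling, each residual branch perturbs the identity path only slightly, so the network stays closer to its input map as depth grows; the paper's claim is that this restraint acts as an implicit architectural regularizer.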

Originally published on March 27, 2026. Curated by AI News.

