[2505.11312] Where You Place the Norm Matters: From Prejudiced to Neutral Initializations


arXiv - Machine Learning · 4 min read

About this article


Computer Science > Machine Learning · arXiv:2505.11312 (cs)

Submitted on 16 May 2025 (v1); last revised 2 Apr 2026 (this version, v4).

Title: Where You Place the Norm Matters: From Prejudiced to Neutral Initializations

Authors: Emanuele Francazi, Francesco Pinto, Aurelien Lucchi, Marco Baity-Jesi

Abstract: Normalization layers were introduced to stabilize and accelerate training, yet their influence is critical already at initialization, where they shape signal propagation and output statistics before parameters adapt to data. In practice, both which normalization to use and where to place it are often chosen heuristically, despite the fact that these decisions can qualitatively alter a model's behavior. We provide a theoretical characterization of how normalization choice and placement (Pre-Norm vs. Post-Norm) determine the distribution of class predictions at initialization, ranging from unbiased (Neutral) to highly concentrated (Prejudiced) regimes. We show that these architectural decisions induce systematic shifts in the initial prediction regime, thereby modulating subsequent learning dynamics. By linking normalization design directly to prediction statistics at initialization, our results offer principled guidance for more controlled and interpretable network design, including clarifying how wi...
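The abstract's core contrast is where the normalization sits in a residual block: Pre-Norm applies it before the transformation (x + f(LN(x))), Post-Norm after the residual sum (LN(x + f(x))). A minimal sketch of how one might probe prediction concentration at initialization is below. This is a hypothetical illustration, not the authors' experimental setup: the block structure, depth, width, and the "largest-class fraction" statistic are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer_norm(x, eps=1e-5):
    # Normalize each sample to zero mean, unit variance across features.
    mu = x.mean(axis=-1, keepdims=True)
    sigma = x.std(axis=-1, keepdims=True)
    return (x - mu) / (sigma + eps)

def block(x, W, placement):
    # One residual block with a ReLU transform; `placement` picks the norm position.
    if placement == "pre":    # Pre-Norm: x + f(LN(x))
        return x + np.maximum(layer_norm(x) @ W, 0.0)
    else:                     # Post-Norm: LN(x + f(x))
        return layer_norm(x + np.maximum(x @ W, 0.0))

def logits_at_init(placement, depth=8, d=64, n=256, n_classes=10):
    # Random inputs through a randomly initialized stack; no training occurs.
    x = rng.standard_normal((n, d))
    for _ in range(depth):
        W = rng.standard_normal((d, d)) / np.sqrt(d)
        x = block(x, W, placement)
    W_out = rng.standard_normal((d, n_classes)) / np.sqrt(d)
    return x @ W_out

# Fraction of inputs assigned to the most popular class: ~0.1 is Neutral
# (uniform over 10 classes), values near 1.0 indicate a Prejudiced regime.
fracs = {}
for placement in ("pre", "post"):
    preds = logits_at_init(placement).argmax(axis=1)
    fracs[placement] = np.bincount(preds, minlength=10).max() / len(preds)
    print(placement, f"largest-class fraction: {fracs[placement]:.2f}")
```

The exact fractions depend on the random seed and architecture choices; the point of the sketch is only that the statistic being characterized, the distribution of class predictions before any training, is directly measurable from the placement decision alone.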

Originally published on April 03, 2026. Curated by AI News.

Related Articles

Machine Learning

5 AI Models Tried to Scam Me. Some of Them Were Scary Good | WIRED

The cyber capabilities of AI models have experts rattled. AI’s social skills may be just as dangerous.

Wired - AI · 8 min
Machine Learning

Are “AI engineers” today just prompt engineers with better branding?

Hot take: A lot of what’s being called “AI engineering” right now feels like: prompt tweaking chaining APIs adding retries/guardrails Not...

Reddit - Artificial Intelligence · 1 min
Machine Learning

Anthropic’s Mythos rollout has missed America’s cybersecurity agency | The Verge

The Cybersecurity and Infrastructure Security Agency (CISA) doesn’t have access to Anthropic’s Mythos Preview, Axios reported.

The Verge - AI · 5 min
Machine Learning

How do you anonymize code for a conference submission? [D]

Hi everyone, I have a question about anonymizing code for conference submissions. I’m submitting an AI/ML paper to a conference and would...

Reddit - Machine Learning · 1 min
