[2509.18001] Unveiling m-Sharpness Through the Structure of Stochastic Gradient Noise

arXiv - Machine Learning 3 min read

About this article

Computer Science > Machine Learning
arXiv:2509.18001 (cs)
[Submitted on 22 Sep 2025 (v1), last revised 2 Apr 2026 (this version, v5)]

Title: Unveiling m-Sharpness Through the Structure of Stochastic Gradient Noise
Authors: Haocheng Luo, Mehrtash Harandi, Dinh Phung, Trung Le

Abstract: Sharpness-aware minimization (SAM) has emerged as a highly effective technique for improving model generalization, but its underlying principles are not fully understood. We investigate m-sharpness, a phenomenon in which SAM performance improves monotonically as the micro-batch size used to compute perturbations decreases; it is critical for distributed training yet lacks a rigorous explanation. We leverage an extended stochastic differential equation (SDE) framework and analyze stochastic gradient noise (SGN) to characterize the dynamics of SAM variants, including n-SAM and m-SAM. Our analysis reveals that stochastic perturbations induce an implicit variance-based sharpness regularization whose strength increases as m decreases. Motivated by this insight, we propose Reweighted SAM (RW-SAM), which employs sharpness-weighted sampling to mimic the generalization benefits of m-SAM while remaining parallelizable. Comprehensive experiments validate our theory and method. Code is available at this https URL.

Subjects: Machine Learning (c...
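For readers unfamiliar with the m-SAM variant the abstract analyzes, the mechanism can be sketched as follows: the batch is split into micro-batches of size m, each micro-batch computes its own sharpness-aware ascent perturbation, and the resulting perturbed gradients are averaged. This is a minimal illustrative sketch, not the authors' implementation; the toy quadratic loss and the helper name `loss_grad` are assumptions for demonstration only.

```python
import numpy as np

def loss_grad(w, batch):
    # Hypothetical toy loss for illustration: L_i(w) = 0.5 * ||w - x_i||^2,
    # so the mean gradient over a (micro-)batch is mean(w - x_i).
    return np.mean(w - batch, axis=0)

def m_sam_step(w, batch, m, rho=0.05, lr=0.1):
    """One m-SAM step (sketch): per-micro-batch ascent perturbations.

    Setting m = len(batch) recovers ordinary single-perturbation SAM;
    smaller m is the regime where the paper reports stronger implicit
    variance-based sharpness regularization.
    """
    grads = []
    for start in range(0, len(batch), m):
        micro = batch[start:start + m]
        g = loss_grad(w, micro)
        # Ascent step computed from this micro-batch alone.
        eps = rho * g / (np.linalg.norm(g) + 1e-12)
        # Gradient at the perturbed point, on the same micro-batch.
        grads.append(loss_grad(w + eps, micro))
    return w - lr * np.mean(grads, axis=0)
```

Because each micro-batch's perturbation depends only on its own gradient, the inner loop is embarrassingly parallel across devices; the sequential loop here is purely for clarity.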

Originally published on April 03, 2026. Curated by AI News.

