[2503.10404] Architecture-Aware Minimization (A$^2$M): How to Find Flat Minima in Neural Architecture Search

arXiv - Machine Learning · 4 min read

Computer Science > Machine Learning
arXiv:2503.10404 (cs)
[Submitted on 13 Mar 2025 (v1), last revised 24 Mar 2026 (this version, v3)]

Title: Architecture-Aware Minimization (A$^2$M): How to Find Flat Minima in Neural Architecture Search
Authors: Matteo Gambella, Fabrizio Pittorino, Manuel Roveri

Abstract: Neural Architecture Search (NAS) has become an essential tool for designing effective and efficient neural networks. In this paper, we investigate the geometric properties of neural architecture spaces commonly used in differentiable NAS methods, specifically NAS-Bench-201 and DARTS. By defining flatness metrics such as neighborhoods and loss barriers along paths in architecture space, we reveal locality and flatness characteristics analogous to the well-known properties of neural network loss landscapes in weight space. In particular, we find that highly accurate architectures cluster together in flat regions, while suboptimal architectures remain isolated, unveiling the detailed geometrical structure of the architecture search landscape. Building on these insights, we propose Architecture-Aware Minimization (A$^2$M), a novel analytically derived algorithmic framework that explicitly biases, for the first time, the gradient of differentiable NAS methods towards flat minima in architecture space...
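The abstract names two ingredients: flatness metrics (neighborhoods and loss barriers along paths in architecture space) and a gradient update biased toward flat minima. The paper's exact A$^2$M derivation is not reproduced on this page; the snippet below is a minimal, hypothetical PyTorch sketch of both ideas for a DARTS-style setting, assuming a supernet whose forward pass takes the continuous architecture parameters `alpha` explicitly. The barrier definition and the SAM-like ascent-then-descent step (Foret et al., 2021) are illustrative stand-ins, not the authors' algorithm.

import torch

def loss_barrier(model, alpha_a, alpha_b, loss_fn, batch, steps=11):
    # Flatness metric in the spirit of the paper: evaluate the loss along
    # the linear path between two architecture-parameter vectors and report
    # the peak height above the endpoints (one common barrier definition).
    x, y = batch
    losses = []
    with torch.no_grad():
        for t in torch.linspace(0.0, 1.0, steps):
            alpha_t = (1.0 - t) * alpha_a + t * alpha_b
            losses.append(loss_fn(model(x, alpha_t), y).item())
    return max(losses) - max(losses[0], losses[-1])

def flat_alpha_step(model, alpha, loss_fn, batch, rho=0.05, lr=3e-4):
    # Hypothetical SAM-like update applied to the architecture parameters
    # only: ascend to a worst-case neighbor of alpha inside an L2 ball of
    # radius rho, then descend with the gradient taken at that neighbor,
    # which biases the search toward flat regions of architecture space.
    x, y = batch
    loss = loss_fn(model(x, alpha), y)
    (grad,) = torch.autograd.grad(loss, alpha)
    eps = rho * grad / (grad.norm() + 1e-12)           # ascent direction
    loss_neighbor = loss_fn(model(x, alpha + eps), y)  # loss at perturbed alpha
    (grad_flat,) = torch.autograd.grad(loss_neighbor, alpha)
    with torch.no_grad():
        alpha -= lr * grad_flat                        # descent step
    return loss.item()

In a real DARTS-style run, `alpha` would be the continuous mixing weights over candidate operations, and the radius `rho` would control how aggressively flatness is enforced relative to raw validation loss.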

Originally published on March 25, 2026. Curated by AI News.

Related Articles

LLMs

[P] Dante-2B: I'm training a 2.1B bilingual fully open Italian/English LLM from scratch on 2×H200. Phase 1 done — here's what I've built.

The problem: if you work with Italian text and local models, you know the pain. Every open-source LLM out there treats Italian as an after...

Reddit - Machine Learning · 1 min
Machine Learning

[R] Architecture Determines Optimization: Deriving Weight Updates from Network Topology (seeking arXiv endorsement - cs.LG)

Abstract: We derive neural network weight updates from first principles without assuming gradient descent or a specific loss function. St...

Reddit - Machine Learning · 1 min
Machine Learning

[P] ML project (XGBoost + Databricks + MLflow) — how to talk about “production issues” in interviews?

Hey all, I recently built an end-to-end fraud detection project using a large banking dataset: trained an XGBoost model; used Databricks f...

Reddit - Machine Learning · 1 min
Machine Learning

[D] The memory chip market lost tens of billions over a paper this community would have understood in 10 minutes

TurboQuant was teased recently, and tens of billions were wiped from the memory-chip market within 48 hours, but anyone in this community who read the pa...

Reddit - Machine Learning · 1 min

