[2602.14729] Scale redundancy and soft gauge fixing in positively homogeneous neural networks

arXiv - AI · 3 min read

Summary

This paper interprets the exact neuron-wise rescaling symmetry of positively homogeneous neural networks as a gauge redundancy, introducing gauge-adapted coordinates that separate invariant from scale-imbalance directions and a soft orbit-selection functional that stabilizes learning and suppresses scale drift.

Why It Matters

Understanding scale redundancy and its implications for neural network optimization can lead to more robust machine learning models. This research connects gauge theory with machine learning, potentially influencing future developments in AI training techniques.

Key Takeaways

  • Positively homogeneous neural networks exhibit continuous reparametrization symmetry.
  • Gauge-adapted coordinates can effectively separate invariant and scale-imbalance directions.
  • A soft orbit-selection functional improves learning stability without affecting model expressivity.
  • The study establishes a link between gauge-theoretic concepts and optimization in machine learning.
  • Controlled experiments validate the proposed methods, expanding the stable learning-rate regime.
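
The first takeaway can be made concrete with a small sketch. The network, weights, and rescaling factor below are illustrative choices, not code from the paper: because ReLU satisfies relu(c·z) = c·relu(z) for c > 0, scaling a hidden neuron's incoming weights by c and its outgoing weights by 1/c moves the parameters along an orbit without changing the realized function.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # hidden-layer weights (4 neurons, 3 inputs)
W2 = rng.normal(size=(2, 4))   # output-layer weights (2 outputs)
x = rng.normal(size=3)

def f(W1, W2, x):
    # Two-layer network with a positively homogeneous activation (ReLU).
    return W2 @ np.maximum(W1 @ x, 0.0)

# Rescale hidden neuron 1 by c > 0: incoming weights up, outgoing weights down.
c = 7.3
W1s, W2s = W1.copy(), W2.copy()
W1s[1, :] *= c
W2s[:, 1] /= c

# Same input-output function, different point on the gauge orbit.
assert np.allclose(f(W1, W2, x), f(W1s, W2s, x))
```

This is the continuous reparametrization symmetry the paper studies: an entire family of parameter settings, indexed by one positive scale per hidden neuron, realizes the identical function.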

Computer Science > Machine Learning · arXiv:2602.14729 (cs.LG) · Submitted on 16 Feb 2026

Title: Scale redundancy and soft gauge fixing in positively homogeneous neural networks
Authors: Rodrigo Carmo Terin

Abstract: Neural networks with positively homogeneous activations exhibit an exact continuous reparametrization symmetry: neuron-wise rescalings generate parameter-space orbits along which the input–output function is invariant. We interpret this symmetry as a gauge redundancy and introduce gauge-adapted coordinates that separate invariant and scale-imbalance directions. Inspired by gauge fixing in field theory, we introduce a soft orbit-selection (norm-balancing) functional acting only on redundant scale coordinates. We show analytically that it induces dissipative relaxation of imbalance modes while preserving the realized function. In controlled experiments, this orbit-selection penalty expands the stable learning-rate regime and suppresses scale drift without changing expressivity. These results establish a structural link between gauge-orbit geometry and optimization conditioning, providing a concrete connection between gauge-theoretic concepts and machine learning.

Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI)
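
The norm-balancing idea in the abstract can be sketched as follows. This is a minimal illustration under our own assumptions, not the paper's implementation: we define a per-neuron imbalance as the gap between squared incoming and outgoing weight norms, and show that the exact gauge choice which zeroes it leaves the realized function unchanged.

```python
import numpy as np

def imbalance(W1, W2):
    # Per-neuron scale imbalance: squared incoming norm minus squared outgoing norm.
    # (Our convention for this sketch; the paper's functional may differ in detail.)
    return (W1 ** 2).sum(axis=1) - (W2 ** 2).sum(axis=0)

def balance_penalty(W1, W2):
    # Soft orbit-selection penalty: acts only along redundant scale directions.
    return 0.5 * (imbalance(W1, W2) ** 2).sum()

rng = np.random.default_rng(1)
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))
x = rng.normal(size=3)

def f(W1, W2, x):
    return W2 @ np.maximum(W1 @ x, 0.0)

# Exact gauge fixing: per neuron, pick c_i > 0 equalizing incoming/outgoing norms.
c = (np.linalg.norm(W2, axis=0) / np.linalg.norm(W1, axis=1)) ** 0.5
W1b = W1 * c[:, None]
W2b = W2 / c[None, :]

assert np.allclose(imbalance(W1b, W2b), 0.0)     # balanced point on the orbit
assert np.allclose(f(W1, W2, x), f(W1b, W2b, x)) # realized function preserved
```

Adding `balance_penalty` to the training loss would instead relax the imbalance modes gradually, which is the "dissipative relaxation" the abstract describes: the penalty's gradient points along the orbit, so it can damp scale drift without constraining which function the network represents.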

