[2602.02853] Recurrent Equivariant Constraint Modulation: Learning Per-Layer Symmetry Relaxation from Data
Summary
The paper introduces Recurrent Equivariant Constraint Modulation (RECM), a method for learning layer-wise relaxation of symmetry constraints in equivariant neural networks, improving both their optimization and their generalization.
Why It Matters
This research addresses a known drawback of strictly equivariant networks: hard symmetry constraints can complicate optimization. By letting each layer relax its symmetry constraint adaptively, based on the data it sees, RECM improves performance across a range of tasks, including molecular conformer generation, without requiring task-specific tuning of the relaxation levels.
Key Takeaways
- RECM learns appropriate relaxation levels for network layers from training data, eliminating the need for task-specific tuning.
- Layers whose input-target distributions are exactly symmetric provably remain fully equivariant, while layers facing only approximate symmetries can relax their constraints accordingly.
- Empirical results show RECM outperforms existing methods in various equivariant tasks, including complex molecular conformer generation.
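The core idea of a per-layer relaxation level can be illustrated with a minimal sketch. The code below is not the authors' RECM implementation; it simply shows, for a cyclic-shift group acting on R^d, a layer that blends a group-averaged (exactly equivariant) weight with an unconstrained weight via a scalar `alpha`. In an RECM-style scheme `alpha` would be learned per layer from the training signal; here it is a hand-set constant for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

# Group: cyclic coordinate shift C_d acting on R^d.
P = np.roll(np.eye(d), 1, axis=0)  # permutation matrix, shifts coordinates by one

W = rng.normal(size=(d, d))  # unconstrained weight

# Project W onto the C_d-equivariant subspace by group averaging:
# W_eq = (1/d) * sum_k P^k W P^{-k}, which commutes with P.
W_eq = sum(
    np.linalg.matrix_power(P, k) @ W @ np.linalg.matrix_power(P, -k)
    for k in range(d)
) / d

def layer(x, alpha):
    """Relaxed linear layer: convex blend of equivariant and free weights.

    alpha = 0 -> strictly equivariant; alpha = 1 -> fully unconstrained.
    (In an RECM-style scheme alpha would be a learned, per-layer quantity.)
    """
    W_mix = (1.0 - alpha) * W_eq + alpha * W
    return W_mix @ x

x = rng.normal(size=d)
for alpha in (0.0, 0.5):
    # Equivariance defect: how far the layer is from commuting with the group.
    err = np.linalg.norm(layer(P @ x, alpha) - P @ layer(x, alpha))
    print(f"alpha={alpha}: equivariance error = {err:.4f}")
```

At `alpha = 0` the defect is zero up to floating-point error, and it grows as `alpha` moves toward the unconstrained weight, which is the behavior a learned relaxation level modulates.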
Computer Science > Machine Learning
arXiv:2602.02853 (cs)
[Submitted on 2 Feb 2026 (v1), last revised 23 Feb 2026 (this version, v2)]
Title: Recurrent Equivariant Constraint Modulation: Learning Per-Layer Symmetry Relaxation from Data
Authors: Stefanos Pertigkiozoglou, Mircea Petrache, Shubhendu Trivedi, Kostas Daniilidis
Abstract: Equivariant neural networks exploit underlying task symmetries to improve generalization, but strict equivariance constraints can induce more complex optimization dynamics that can hinder learning. Prior work addresses these limitations by relaxing strict equivariance during training, but typically relies on prespecified, explicit, or implicit target levels of relaxation for each network layer, which are task-dependent and costly to tune. We propose Recurrent Equivariant Constraint Modulation (RECM), a layer-wise constraint modulation mechanism that learns appropriate relaxation levels solely from the training signal and the symmetry properties of each layer's input-target distribution, without requiring any prior knowledge about the task-dependent target relaxation level. We demonstrate that under the proposed RECM update, the relaxation level of each layer provably converges to a value upper-bounded by its symmetry gap, namely the degree to which ...
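The abstract's "symmetry gap" measures how far a layer's input-target distribution deviates from exact symmetry. A hedged empirical sketch, under the assumption that the gap is measured as the mean equivariance defect of the targets under the group action (the paper's precise definition may differ): we use a hypothetical target function that is mostly shift-equivariant plus a small symmetry-breaking term, so its gap is small but nonzero.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 4, 256

# Group: cyclic coordinate shift on R^d.
P = np.roll(np.eye(d), 1, axis=0)

def target(x):
    # Hypothetical target: a shift-equivariant part (np.roll) plus a small
    # symmetry-breaking perturbation, so the true gap is nonzero but small.
    return np.roll(x, 1) + 0.1 * x[0] * np.ones(d)

X = rng.normal(size=(n, d))

# Empirical symmetry gap: mean defect || f(P x) - P f(x) || over samples.
gap = np.mean([np.linalg.norm(target(P @ x) - P @ target(x)) for x in X])
print(f"estimated symmetry gap = {gap:.3f}")
```

An exactly equivariant target would give a gap of zero; the perturbation above yields a small positive value, which is the kind of quantity a learned relaxation level would, per the abstract, converge to stay below.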