[2510.00027] Learning Inter-Atomic Potentials without Explicit Equivariance
Computer Science > Machine Learning

arXiv:2510.00027 (cs)

[Submitted on 25 Sep 2025 (v1), last revised 31 Mar 2026 (this version, v3)]

Title: Learning Inter-Atomic Potentials without Explicit Equivariance

Authors: Ahmed A. Elhag, Arun Raja, Alex Morehead, Samuel M. Blau, Hongtao Zhao, Christian Tyrchan, Eva Nittinger, Garrett M. Morris, Michael M. Bronstein

Abstract: Accurate and scalable machine-learned inter-atomic potentials (MLIPs) are essential for molecular simulations ranging from drug discovery to new materials design. Current state-of-the-art models enforce roto-translational symmetries through equivariant neural network architectures, a hard-wired inductive bias that can reduce flexibility, computational efficiency, and scalability. In this work, we introduce TransIP: Transformer-based Inter-Atomic Potentials, a novel training paradigm for interatomic potentials that achieves symmetry compliance without explicit architectural constraints. Our approach guides a generic non-equivariant Transformer-based model to learn SO(3)-equivariance by optimizing its representations in the embedding space. Trained on the recent Open Molecules (OMol25) collection, a large and dive...
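The abstract's central idea, learning SO(3)-equivariance through a penalty on the model's representations rather than through an equivariant architecture, can be illustrated with a toy sketch. This is an assumption-laden illustration, not the paper's method: the linear per-atom map, the loss form, and the rotation sampling are all stand-ins for the Transformer embeddings and training objective described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rotation(rng):
    # Sample a uniform random 3D rotation via QR decomposition.
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q *= np.sign(np.diag(r))          # fix the sign ambiguity of QR
    if np.linalg.det(q) < 0:          # ensure det = +1 (proper rotation)
        q[:, 0] *= -1
    return q

def model(x, w):
    # Generic, non-equivariant per-atom map: positions (n, 3) -> features (n, 3).
    # In TransIP this role is played by Transformer embeddings (assumption here).
    return x @ w

def equivariance_loss(w, x, rng, n_rot=8):
    # Penalize || f(x R^T) - f(x) R^T ||^2 over sampled rotations, pushing
    # the learned map toward SO(3)-equivariance without any architectural constraint.
    loss = 0.0
    for _ in range(n_rot):
        r = random_rotation(rng)
        loss += np.mean((model(x @ r.T, w) - model(x, w) @ r.T) ** 2)
    return loss / n_rot

x = rng.standard_normal((16, 3))                       # toy atomic coordinates
print(equivariance_loss(np.eye(3) * 0.1, x, rng))      # ~0: a scaled identity commutes with rotations
print(equivariance_loss(rng.standard_normal((3, 3)), x, rng))  # > 0: a generic map is not equivariant
```

Minimizing such a penalty during training (alongside the energy/force objective) is the kind of soft symmetry constraint the abstract contrasts with hard-wired equivariant architectures.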