[2603.02002] MatRIS: Toward Reliable and Efficient Pretrained Machine Learning Interaction Potentials
arXiv:2603.02002 (cs) — Computer Science > Machine Learning
Submitted on 2 Mar 2026

Title: MatRIS: Toward Reliable and Efficient Pretrained Machine Learning Interaction Potentials
Authors: Yuanchang Zhou, Siyu Hu, Xiangyu Zhang, Hongyu Wang, Guangming Tan, Weile Jia

Abstract: Foundation MLIPs demonstrate broad applicability across diverse material systems and have emerged as a powerful and transformative paradigm in chemical and computational materials science. Equivariant MLIPs achieve state-of-the-art accuracy on a wide range of benchmarks by incorporating an equivariant inductive bias. However, their reliance on tensor products and high-degree representations makes them computationally costly. This raises a fundamental question: as quantum-mechanics-based datasets continue to expand, can we develop a more compact model that thoroughly exploits high-dimensional atomic interactions? In this work, we present MatRIS (Materials Representation and Interaction Simulation), an invariant MLIP that introduces attention-based modeling of three-body interactions. MatRIS leverages a novel separable attention mechanism with linear complexity $O(N)$, enabling both scalability and expressiveness. MatRIS delivers accuracy comparable to that of leading equivariant models on a w...
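The abstract does not spell out the separable attention mechanism, but the general idea behind attention with linear $O(N)$ complexity is well established: replace the softmax with a positive feature map $\phi$ so the similarity factorizes, letting the key-value aggregation be computed once and reused for every query. A minimal NumPy sketch under that assumption (the feature map, shapes, and function name here are illustrative, not the paper's actual architecture):

```python
import numpy as np

def linear_attention(Q, K, V):
    """Kernelized (separable) attention in O(N * d^2) instead of O(N^2 * d).

    Q: (N, d) queries, K: (N, d) keys, V: (N, d_v) values, one row per atom.
    Standard attention forms the (N, N) matrix softmax(Q K^T) V; here the
    similarity phi(q)^T phi(k) factorizes, so we aggregate keys and values
    once and reuse the result for every query.
    """
    # Positive feature map (ReLU-like; an illustrative choice, not the paper's).
    phi = lambda x: np.maximum(x, 0.0) + 1e-6
    Qp, Kp = phi(Q), phi(K)
    KV = Kp.T @ V                 # (d, d_v): global key-value summary, computed once
    Z = Qp @ Kp.sum(axis=0)       # (N,): per-query normalizer
    return (Qp @ KV) / Z[:, None]  # (N, d_v): attention output, linear in N
```

Because the `(d, d_v)` summary `KV` is independent of the query index, total cost grows linearly with the number of atoms N, which is what makes such a mechanism attractive for large-scale MLIP inference.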