[2506.06858] FA-INR: Adaptive Implicit Neural Representations for Interpretable Exploration of Simulation Ensembles
Computer Science > Machine Learning
arXiv:2506.06858 (cs)
[Submitted on 7 Jun 2025 (v1), last revised 31 Mar 2026 (this version, v3)]

Title: FA-INR: Adaptive Implicit Neural Representations for Interpretable Exploration of Simulation Ensembles
Authors: Ziwei Li, Yuhan Duan, Tianyu Xiong, Yi-Tang Chen, Wei-Lun Chao, Han-Wei Shen

Abstract: Surrogate models are essential for efficient exploration of large-scale ensemble simulations. Implicit neural representations (INRs) provide a compact and continuous framework for modeling spatially structured data, but they often struggle to learn complex localized structures within scientific fields. Recent INR-based surrogates address this by augmenting INRs with explicit feature structures, but at the cost of flexibility and substantial memory overhead. In this paper, we present Feature-Adaptive INR (FA-INR), an adaptive INR-based surrogate model for high-fidelity and interpretable exploration of ensemble simulations. Instead of relying on structured feature representations, FA-INR leverages cross-attention over a learnable key-value memory bank to allocate model capacity adaptively based on data characteristics. To further improve scalability, we introduce a coordinate-guided mixture of experts (MoE) framework that enhances both eff...
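The core mechanism the abstract names — cross-attention from coordinate embeddings into a learnable key-value memory bank — can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: the dimensions (`d`, `m`), the random initialization, and the function names are all hypothetical, and in practice `keys` and `values` would be trainable parameters updated by gradient descent.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
d, m = 16, 32  # hypothetical: embedding dim, number of memory slots

# Learnable key-value memory bank (here: fixed random stand-ins).
keys = rng.normal(size=(m, d))
values = rng.normal(size=(m, d))

def memory_attend(queries):
    """Cross-attention: coordinate embeddings (n, d) query the memory bank.

    Each query forms a softmax-weighted mixture of the m value vectors,
    so capacity is allocated per-coordinate rather than on a fixed grid.
    """
    scores = queries @ keys.T / np.sqrt(d)   # (n, m) scaled dot products
    weights = softmax(scores, axis=-1)       # attention over memory slots
    return weights @ values                  # (n, d) retrieved features

q = rng.normal(size=(4, d))  # 4 embedded query coordinates
features = memory_attend(q)
print(features.shape)  # (4, 16)
```

The retrieved features would then feed the downstream MLP decoder; a structured feature grid is replaced by a flat, data-adaptive memory that attention indexes softly.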