[2602.15155] Refine Now, Query Fast: A Decoupled Refinement Paradigm for Implicit Neural Fields
Summary
The paper presents a Decoupled Representation Refinement (DRR) paradigm for Implicit Neural Representations (INRs) that improves both speed and fidelity in surrogates for 3D scientific simulations by separating a one-time offline refinement step from the fast inference path.
Why It Matters
This research addresses the critical trade-off between inference speed and model fidelity in machine learning, particularly for neural surrogates of scientific simulations. By introducing DRR, the authors provide an approach that could significantly improve the efficiency and applicability of INRs across computational engineering and related fields.
Key Takeaways
- The DRR paradigm decouples high-capacity neural networks from fast inference paths.
- DRR-Net achieves state-of-the-art fidelity while being up to 27 times faster than traditional high-fidelity models.
- The introduction of Variational Pairs (VP) enhances INRs for complex tasks.
- This approach offers a practical solution for building efficient neural field surrogates.
- The findings have implications for various applications in computational engineering and AI.
Computer Science > Machine Learning
arXiv:2602.15155 (cs)
[Submitted on 16 Feb 2026]
Title: Refine Now, Query Fast: A Decoupled Refinement Paradigm for Implicit Neural Fields
Authors: Tianyu Xiong, Skylar Wurster, Han-Wei Shen
Abstract: Implicit Neural Representations (INRs) have emerged as promising surrogates for large 3D scientific simulations due to their ability to continuously model spatial and conditional fields, yet they face a critical fidelity-speed dilemma: deep MLPs suffer from high inference cost, while efficient embedding-based models lack sufficient expressiveness. To resolve this, we propose the Decoupled Representation Refinement (DRR) architectural paradigm. DRR leverages a deep refiner network, alongside non-parametric transformations, in a one-time offline process to encode rich representations into a compact and efficient embedding structure. This approach decouples slow neural networks with high representational capacity from the fast inference path. We introduce DRR-Net, a simple network that validates this paradigm, and a novel data augmentation strategy, Variational Pairs (VP), for improving INRs under complex tasks like high-dimensional surrogate modeling. Experiments on several ensemble simulation datasets demonstrate that our approach achieves state-of-the-art fidelity, wh...
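The decoupling described in the abstract can be sketched in miniature. The snippet below is an illustrative toy, not the paper's DRR-Net: all sizes, layer counts, and names (`refiner`, `decoder`, the 1D feature grid) are assumptions for demonstration. The key structural point it shows is that the deep refiner runs once, offline, to rewrite a compact embedding grid, while each query only pays for a grid interpolation plus a tiny decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): a small 1D feature grid.
GRID, FEAT = 32, 8

def mlp(x, weights):
    # Row-wise ReLU MLP; `weights` is a list of (W, b) pairs.
    for W, b in weights[:-1]:
        x = np.maximum(x @ W + b, 0.0)
    W, b = weights[-1]
    return x @ W + b

def init(sizes):
    return [(rng.standard_normal((i, o)) * 0.1, np.zeros(o))
            for i, o in zip(sizes[:-1], sizes[1:])]

# Raw embedding grid, as an efficient embedding-based INR would learn it.
raw_grid = rng.standard_normal((GRID, FEAT))

# Offline: a deep "refiner" network rewrites the grid in one pass.
# This high-capacity network never appears on the query path.
refiner = init([FEAT, 64, 64, 64, FEAT])
refined_grid = mlp(raw_grid, refiner)

# Online: queries touch only a grid lookup and a shallow decoder.
decoder = init([FEAT, 16, 1])

def query(coords):
    # Linear interpolation into the refined grid, then a cheap decode.
    x = np.clip(coords, 0.0, 1.0) * (GRID - 1)
    i0 = np.floor(x).astype(int)
    i1 = np.minimum(i0 + 1, GRID - 1)
    t = (x - i0)[:, None]
    feats = (1 - t) * refined_grid[i0] + t * refined_grid[i1]
    return mlp(feats, decoder)

out = query(np.array([0.0, 0.5, 1.0]))
print(out.shape)  # (3, 1): one scalar field value per query coordinate
```

The inference cost here is independent of the refiner's depth, which is the source of the reported speedups over running a deep MLP per query; the paper's actual architecture, non-parametric transformations, and multi-dimensional grids are richer than this sketch.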