[2510.06646] The False Promise of Zero-Shot Super-Resolution in Machine-Learned Operators
Computer Science > Machine Learning

arXiv:2510.06646 (cs)

[Submitted on 8 Oct 2025 (v1), last revised 26 Feb 2026 (this version, v2)]

Title: The False Promise of Zero-Shot Super-Resolution in Machine-Learned Operators

Authors: Mansi Sakarvadia, Kareem Hegazy, Amin Totounferoush, Kyle Chard, Yaoqing Yang, Ian Foster, Michael W. Mahoney

Abstract: A core challenge in scientific machine learning, and in scientific computing more generally, is modeling continuous phenomena that are, in practice, represented discretely. Machine-learned operators (MLOs) have been introduced to meet this modeling goal, as this class of architectures can perform inference at arbitrary resolution. In this work, we evaluate whether that architectural innovation is sufficient for "zero-shot super-resolution," namely enabling a model to serve inference on higher-resolution data than that on which it was originally trained. We comprehensively evaluate both zero-shot sub-resolution and super-resolution (i.e., multi-resolution) inference in MLOs. We decouple multi-resolution inference into two key behaviors: 1) extrapolation to varying frequency information; and 2) interpolation across varying resolutions. We empirically demonstrate that MLOs fail at both of these tasks in a zero-shot manner. Consequently, we fi...
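The frequency-extrapolation behavior the abstract names can be illustrated with a short NumPy sketch (our own illustration under simple assumptions, not code from the paper): a signal sampled on a coarse grid simply cannot represent frequencies above that grid's Nyquist limit, so a model trained only at low resolution never observes the high-frequency content that zero-shot super-resolution would require it to extrapolate.

```python
import numpy as np

def spectrum(n):
    """Magnitude spectrum of a two-mode signal sampled on an n-point grid."""
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    # Modes at 4 and 40 cycles, amplitudes 1.0 and 0.5 (illustrative choices).
    u = np.sin(2 * np.pi * 4 * x) + 0.5 * np.sin(2 * np.pi * 40 * x)
    return np.abs(np.fft.rfft(u)) / n

low, high = spectrum(64), spectrum(256)

# The 64-point grid resolves frequencies only up to its Nyquist limit (32),
# so the 40-cycle mode is not observed as such: it aliases down to bin
# 64 - 40 = 24. Training data at this resolution contains no record of the
# true high-frequency mode.
print(f"bins on 64-point grid: {len(low)}")            # 33 (frequencies 0..32)
print(f"high-res energy at bin 40: {high[40]:.3f}")    # ~0.25
print(f"low-res energy at bin 24 (alias): {low[24]:.3f}")  # ~0.25
```

Nothing here depends on the model class: whatever architecture is trained on the 64-point samples, the 40-cycle mode is absent from (and aliased within) its training signal, which is the sense in which super-resolution inference demands extrapolation rather than interpolation.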