[2604.01961] Generalization Bounds and Statistical Guarantees for Multi-Task and Multiple Operator Learning with MNO Networks
Computer Science > Machine Learning

arXiv:2604.01961 (cs) [Submitted on 2 Apr 2026]

Title: Generalization Bounds and Statistical Guarantees for Multi-Task and Multiple Operator Learning with MNO Networks
Authors: Adrien Weihs, Hayden Schaeffer

Abstract: Multiple operator learning concerns learning operator families $\{G[\alpha]: U \to V\}_{\alpha \in W}$ indexed by an operator descriptor $\alpha$. Training data are collected hierarchically by sampling operator instances $\alpha$, then input functions $u$ per instance, and finally evaluation points $x$ per input, yielding noisy observations of $G[\alpha][u](x)$. While recent work has developed expressive multi-task and multiple operator learning architectures and approximation-theoretic scaling laws, quantitative statistical generalization guarantees remain limited. We provide a covering-number-based generalization analysis for separable models, focusing on the Multiple Neural Operator (MNO) architecture: we first derive explicit metric-entropy bounds for hypothesis classes given by linear combinations of products of deep ReLU subnetworks, and then combine these complexity bounds with approximation guarantees for MNO to obtain an explicit approximation-estimation tradeoff for the expected test error on new (unseen) triples $(\alpha,u,x)$. The...
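Both the separable hypothesis class named in the abstract (linear combinations of products of deep ReLU subnetworks) and the hierarchical sampling of triples $(\alpha,u,x)$ have simple concrete forms. The sketch below is not the authors' implementation; it is a minimal PyTorch illustration that assumes finite-dimensional encodings of the descriptor $\alpha$ and the input function $u$ (e.g., $u$ sampled on a fixed sensor grid), and all names (`SeparableMNOSketch`, `relu_mlp`, `make_dataset`) as well as the widths, depths, and number of product terms `K` are hypothetical choices.

```python
import torch
import torch.nn as nn

def relu_mlp(in_dim: int, width: int, depth: int, out_dim: int) -> nn.Sequential:
    """Deep ReLU network with `depth` hidden layers of size `width`."""
    layers = [nn.Linear(in_dim, width), nn.ReLU()]
    for _ in range(depth - 1):
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers.append(nn.Linear(width, out_dim))
    return nn.Sequential(*layers)

class SeparableMNOSketch(nn.Module):
    """Illustrative separable model
        G_theta(alpha, u, x) = sum_{k=1}^K p_k(alpha) * q_k(u) * r_k(x),
    a linear combination of products of deep ReLU subnetworks (the
    combination weights are absorbed into the output heads). Dimensions
    and K are placeholder choices, not values from the paper.
    """
    def __init__(self, alpha_dim: int, u_dim: int, x_dim: int,
                 K: int = 16, width: int = 64, depth: int = 3):
        super().__init__()
        self.p = relu_mlp(alpha_dim, width, depth, K)  # descriptor branch
        self.q = relu_mlp(u_dim, width, depth, K)      # input-function branch
        self.r = relu_mlp(x_dim, width, depth, K)      # query-point branch

    def forward(self, alpha, u, x):
        # Elementwise product over the K terms, then sum: the separable form.
        return (self.p(alpha) * self.q(u) * self.r(x)).sum(dim=-1)

# Hierarchical data collection mirroring the protocol in the abstract:
# sample operator instances alpha, then input functions u per instance,
# then evaluation points x per input, with additive observation noise.
# `G`, `sample_alpha`, `sample_u`, `sample_x` are problem-specific
# stand-ins that a user would supply.
def make_dataset(G, sample_alpha, sample_u, sample_x,
                 n_ops=8, n_inputs=4, n_points=32, noise_std=0.01):
    triples = []
    for _ in range(n_ops):
        alpha = sample_alpha()
        for _ in range(n_inputs):
            u = sample_u()
            for _ in range(n_points):
                x = sample_x()
                y = G(alpha, u, x) + noise_std * torch.randn(())
                triples.append((alpha, u, x, y))
    return triples
```

The factorized form $\sum_k p_k(\alpha)\,q_k(u)\,r_k(x)$ is also what makes a covering-number analysis natural: the metric entropy of the combined class can be controlled through the entropies of the individual ReLU subnetwork classes, which is the route the abstract describes.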