[2602.21910] The Error of Deep Operator Networks Is the Sum of Its Parts: Branch-Trunk and Mode Error Decompositions

arXiv - Machine Learning · 4 min read

Summary

This paper analyzes the performance limitations of Deep Operator Networks (DeepONets) and proposes modifications to improve their accuracy and generalization in scientific computing tasks.

Why It Matters

Understanding the errors in DeepONets is crucial for enhancing their adoption in scientific computing. This research provides insights into how to optimize these networks, potentially accelerating tasks like design optimization and uncertainty quantification, which are vital in various engineering and scientific applications.

Key Takeaways

  • The approximation error is dominated by the branch network when the internal dimension is sufficiently large.
  • Replacing the trunk network with classical basis functions does not significantly affect performance.
  • A shared branch network improves generalization for small modes compared to a stacked architecture.
  • A spectral bias is observed, favoring low-frequency modes in the branch network.
  • Coupling between modes in parameter space can negatively impact performance.
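The "sum of its parts" framing in the title can be read as a Pythagorean split of the error into a trunk part and per-mode branch parts. The following sketch is an illustrative reconstruction under the assumption of an orthonormal trunk basis \(\{\phi_k\}\), not a formula quoted from the paper:

```latex
% DeepONet prediction: branch coefficients b_k(\mu) times trunk basis functions \phi_k(x)
\hat{u}(x;\mu) \;=\; \sum_{k=1}^{p} b_k(\mu)\,\phi_k(x),
\qquad c_k(\mu) \;=\; \langle u(\cdot;\mu),\,\phi_k\rangle .
% If the \phi_k are orthonormal, the squared L^2 error decomposes exactly:
\|u-\hat{u}\|_{L^2}^2
\;=\;
\underbrace{\Bigl\|u-\sum_{k=1}^{p} c_k(\mu)\,\phi_k\Bigr\|_{L^2}^2}_{\text{trunk (projection) error}}
\;+\;
\underbrace{\sum_{k=1}^{p}\bigl(c_k(\mu)-b_k(\mu)\bigr)^2}_{\text{branch error, mode by mode}} .
```

Under this reading, the branch-dominance takeaway above says that for large \(p\) the first term becomes negligible, so the error is governed by how well the branch network predicts the coefficients \(c_k(\mu)\).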

Computer Science > Machine Learning
arXiv:2602.21910 (cs) · [Submitted on 25 Feb 2026]

Title: The Error of Deep Operator Networks Is the Sum of Its Parts: Branch-Trunk and Mode Error Decompositions
Authors: Alexander Heinlein, Johannes Taraz

Abstract: Operator learning has the potential to strongly impact scientific computing by learning solution operators for differential equations, potentially accelerating multi-query tasks such as design optimization and uncertainty quantification by orders of magnitude. Despite proven universal approximation properties, deep operator networks (DeepONets) often exhibit limited accuracy and generalization in practice, which hinders their adoption. Understanding these limitations is therefore crucial for further advancing the approach. This work analyzes performance limitations of the classical DeepONet architecture. It is shown that the approximation error is dominated by the branch network when the internal dimension is sufficiently large, and that the learned trunk basis can often be replaced by classical basis functions without a significant impact on performance. To investigate this further, a modified DeepONet is constructed in which the trunk network is replaced by the left singular vectors of the training solution matrix. This modifi...
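The modified architecture described in the abstract, with the trunk network replaced by the left singular vectors of the training solution matrix, can be sketched in a few lines of numpy. This is a hypothetical illustration, not the authors' code: the branch network is stood in for by a plain linear least-squares map, and the data is a synthetic three-parameter family of solutions chosen so the construction is easy to check.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (assumption for illustration): n_train "solutions" sampled
# on n_x spatial points, generated from a 3-parameter family of sine modes.
n_train, n_x, n_modes = 200, 64, 3
x = np.linspace(0.0, 1.0, n_x)
params = rng.uniform(-1.0, 1.0, size=(n_train, 3))
U_train = (params[:, [0]] * np.sin(np.pi * x)
           + params[:, [1]] * np.sin(2 * np.pi * x)
           + params[:, [2]] * np.sin(3 * np.pi * x))      # shape (n_train, n_x)

# Trunk basis: leading left singular vectors of the solution matrix
# (solutions stored as columns, hence the transpose).
Phi, s, _ = np.linalg.svd(U_train.T, full_matrices=False)
trunk = Phi[:, :n_modes]                                  # shape (n_x, n_modes)

# Stand-in "branch": project training solutions onto the trunk basis,
# then fit a linear map from parameters to mode coefficients.
coeffs = U_train @ trunk                                  # shape (n_train, n_modes)
W, *_ = np.linalg.lstsq(params, coeffs, rcond=None)

def predict(p):
    """Branch coefficients times the fixed SVD trunk basis."""
    return (p @ W) @ trunk.T

# Because the synthetic family is linear in the parameters and rank 3,
# the fit should be exact up to floating-point error.
err = np.max(np.abs(predict(params) - U_train))
print(f"max training error: {err:.2e}")
```

In a real DeepONet the linear map `W` would be a trained branch network and the data would come from PDE solves; the point of the sketch is only that, once the trunk is fixed to the SVD basis, all remaining error lives in the map from inputs to mode coefficients.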
