[2602.18248] Neural-HSS: Hierarchical Semi-Separable Neural PDE Solver
Summary
The paper presents Neural-HSS, a parameter-efficient deep learning architecture for solving partial differential equations (PDEs) that is provably data-efficient, remaining accurate even in very low-data regimes.
Why It Matters
Neural-HSS addresses the significant computational challenges in generating large-scale datasets for PDEs, making it relevant for fields like fluid dynamics and electromagnetism. Its efficiency could lead to faster simulations and broader applicability in scientific computing.
Key Takeaways
- Neural-HSS utilizes a Hierarchical Semi-Separable matrix structure for efficient PDE solving.
- The architecture is provably data-efficient, satisfying exactness properties even with very little training data.
- Experiments demonstrate its data efficiency on the three-dimensional Poisson equation over a grid of two million points.
- Neural-HSS can be applied across various domains, including biology and fluid dynamics.
- The study relates Neural-HSS to existing architectural primitives, such as Fourier neural operator layers and convolutional layers.
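To make the key structural idea concrete: a semi-separable matrix has low-rank off-diagonal blocks, so a matrix-vector product can avoid the full dense cost. The sketch below is a minimal one-level illustration of this principle in NumPy; the block sizes, rank, and function names are illustrative assumptions, not the paper's actual Neural-HSS architecture, which builds a hierarchy of such blocks into a trainable network.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 8, 2  # block size and off-diagonal rank (illustrative values)

# Dense diagonal blocks, plus low-rank factors for the off-diagonal blocks
D1, D2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
U1, V1 = rng.standard_normal((n, r)), rng.standard_normal((n, r))
U2, V2 = rng.standard_normal((n, r)), rng.standard_normal((n, r))

def ss_matvec(x):
    """One-level semi-separable matvec: the off-diagonal coupling
    costs O(n*r) per block instead of O(n^2)."""
    x1, x2 = x[:n], x[n:]
    y1 = D1 @ x1 + U1 @ (V2.T @ x2)  # top block row
    y2 = D2 @ x2 + U2 @ (V1.T @ x1)  # bottom block row
    return np.concatenate([y1, y2])

# Sanity check against the explicitly assembled dense matrix
A = np.block([[D1,         U1 @ V2.T],
              [U2 @ V1.T,  D2       ]])
x = rng.standard_normal(2 * n)
assert np.allclose(ss_matvec(x), A @ x)
```

A hierarchical (HSS) version applies this splitting recursively inside each diagonal block, which is what makes the structure scale to fine PDE discretizations.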
Computer Science > Machine Learning — arXiv:2602.18248 (cs)
Submitted on 20 Feb 2026
Title: Neural-HSS: Hierarchical Semi-Separable Neural PDE Solver
Authors: Pietro Sittoni, Emanuele Zangrando, Angelo A. Casulli, Nicola Guglielmi, Francesco Tudisco
Abstract: Deep learning-based methods have shown remarkable effectiveness in solving PDEs, largely due to their ability to enable fast simulations once trained. However, despite the availability of high-performance computing infrastructure, many critical applications remain constrained by the substantial computational costs associated with generating large-scale, high-quality datasets and training models. In this work, inspired by studies on the structure of Green's functions for elliptic PDEs, we introduce Neural-HSS, a parameter-efficient architecture built upon the Hierarchical Semi-Separable (HSS) matrix structure that is provably data-efficient for a broad class of PDEs. We theoretically analyze the proposed architecture, proving that it satisfies exactness properties even in very low-data regimes. We also investigate its connections with other architectural primitives, such as the Fourier neural operator layer and convolutional layers. We experimentally validate the data efficiency of Neural-HSS on the three-dimensional Poisson equation over a grid of two million points,...