[2602.14663] Pseudo-differential-enhanced physics-informed neural networks
Summary
This article introduces pseudo-differential-enhanced physics-informed neural networks (PINNs), an extension of gradient enhancement into Fourier space that improves training efficiency and accuracy when solving partial differential equations (PDEs).
Why It Matters
The development of pseudo-differential-enhanced PINNs is significant because it addresses common challenges in training neural networks for PDEs, such as frequency bias and the difficulty of learning solutions with high fidelity. This innovation could lead to more efficient computational methods across scientific and engineering applications.
Key Takeaways
- Pseudo-differential-enhanced PINNs utilize Fourier transforms for improved training.
- The method enhances learning fidelity by addressing frequency bias.
- It is compatible with advanced techniques like Fourier feature embeddings.
- The approach allows for greater mesh flexibility in numerical analysis.
- It shows potential for faster convergence in training iterations.
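One takeaway above is compatibility with Fourier feature embeddings. As a point of reference, a standard Fourier feature embedding (in the style of random Fourier features; the function name, bandwidth, and shapes here are illustrative assumptions, not details from the paper) can be sketched as:

```python
import numpy as np

def fourier_features(x, B):
    """Map inputs x of shape (N, d) to gamma(x) = [cos(2*pi*x @ B.T), sin(2*pi*x @ B.T)].

    B of shape (m, d) is a random frequency matrix; a larger scale on its
    entries biases the embedding toward higher frequencies.
    """
    proj = 2.0 * np.pi * x @ B.T                                  # (N, m)
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=1)   # (N, 2m)

rng = np.random.default_rng(0)
B = rng.normal(scale=10.0, size=(64, 1))      # bandwidth scale is a tunable hyperparameter
x = np.linspace(0.0, 1.0, 128).reshape(-1, 1)
gamma = fourier_features(x, B)
print(gamma.shape)  # (128, 128)
```

The embedded coordinates, rather than the raw inputs, are then fed to the network; this is one common way to counter frequency bias, and the paper's Fourier-space residual augmentation is reported to be compatible with it.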
Computer Science > Machine Learning
arXiv:2602.14663 (cs)
[Submitted on 16 Feb 2026]
Title: Pseudo-differential-enhanced physics-informed neural networks
Authors: Andrew Gracyk
Abstract: We present pseudo-differential-enhanced physics-informed neural networks (PINNs), an extension of gradient enhancement to Fourier space. Gradient enhancement of PINNs takes the PDE residual to a higher differential order than the PDE prescribes and adds it to the objective as an augmented term, in order to improve training and overall learning fidelity. We propose the same procedure after application of Fourier transforms, since differentiation in Fourier space is multiplication by the Fourier wavenumber under suitable decay. Our methods are fast and efficient. They oftentimes achieve superior PINN-versus-numerical error in fewer training iterations, potentially pair well with few collocation samples, and can on occasion break plateaus in low-collocation settings. Moreover, our methods are suitable for fractional derivatives. We establish that our methods improve the spectral eigenvalue decay of the neural tangent kernel (NTK), and so contribute to the learning of high frequencies in early training, mitigating the effects of frequency bias up to the polynomial order and possibly beyond with smooth activations. ...
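The abstract's central observation, that differentiation in Fourier space becomes multiplication by the wavenumber, can be illustrated with a plain FFT on a periodic grid. This is a minimal sketch of the underlying identity, not the authors' implementation; the function name and grid setup are my own, and for non-integer `order` the same multiplier would define a fractional derivative up to branch-cut conventions:

```python
import numpy as np

def spectral_derivative(u, L, order=1):
    """Differentiate periodic samples u on [0, L) by multiplying the FFT by (i*k)**order."""
    n = u.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
    return np.fft.ifft((1j * k) ** order * np.fft.fft(u)).real

n, L = 256, 2.0 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
u = np.sin(3.0 * x)
du = spectral_derivative(u, L)                     # approximates 3*cos(3x)
print(np.max(np.abs(du - 3.0 * np.cos(3.0 * x))))  # error near machine precision
```

In the paper's setting, a term of this form (a Fourier-space analogue of the higher-order residual used in gradient enhancement) is added to the PINN objective; the sketch above only demonstrates why that derivative is cheap to form once the transform is available.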