[2602.14663] Pseudo-differential-enhanced physics-informed neural networks

arXiv - Machine Learning 4 min read Article

Summary

This article introduces pseudo-differential-enhanced physics-informed neural networks (PINNs), which improve training efficiency and accuracy in solving partial differential equations (PDEs) by augmenting the training objective with PDE residuals differentiated in Fourier space.

Why It Matters

The development of pseudo-differential-enhanced PINNs is significant because it addresses common challenges in training neural networks for PDEs, such as frequency bias and slow convergence to high-fidelity solutions. This innovation could lead to more efficient computational methods across scientific and engineering applications.

Key Takeaways

  • Pseudo-differential-enhanced PINNs utilize Fourier transforms for improved training.
  • The method enhances learning fidelity by addressing frequency bias.
  • It is compatible with advanced techniques like Fourier feature embeddings.
  • The approach can pair well with sparse collocation sampling.
  • It shows potential for faster convergence in training iterations.
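The mechanism behind these takeaways is the classical spectral identity: for a smooth periodic function, differentiation in physical space becomes multiplication by i times the wavenumber in Fourier space. A minimal NumPy sketch (our illustration, not code from the paper) shows this spectral derivative matching the analytic one to machine precision:

```python
import numpy as np

# Spectral differentiation: for a periodic function sampled on a uniform
# grid, d/dx corresponds to multiplying the FFT by i*k.
n = 128
L = 2 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
u = np.sin(3 * x)

k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)  # angular wavenumbers
du_spectral = np.fft.ifft(1j * k * np.fft.fft(u)).real
du_exact = 3 * np.cos(3 * x)

err = np.max(np.abs(du_spectral - du_exact))
print(f"max error: {err:.2e}")  # spectrally accurate for smooth periodic u
```

Because this is a pointwise multiplication in frequency space, raising the differential order (or, per the abstract, taking a fractional order) costs no extra autodiff passes, which is the source of the claimed efficiency.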

Computer Science > Machine Learning

arXiv:2602.14663 (cs) [Submitted on 16 Feb 2026]

Title: Pseudo-differential-enhanced physics-informed neural networks
Authors: Andrew Gracyk

Abstract: We present pseudo-differential-enhanced physics-informed neural networks (PINNs), an extension of gradient enhancement into Fourier space. Gradient enhancement of PINNs takes the PDE residual to a higher differential order than the PDE prescribes and adds it to the objective as an augmented term, improving training and overall learning fidelity. We propose the same procedure after applying Fourier transforms, since differentiation in Fourier space is multiplication by the Fourier wavenumber under suitable decay. Our methods are fast and efficient. They often achieve superior PINN-versus-numerical error in fewer training iterations, potentially pair well with few collocation samples, and can on occasion break plateaus in low-collocation settings. Moreover, our methods are suitable for fractional derivatives. We establish that our methods improve the spectral eigenvalue decay of the neural tangent kernel (NTK), so they contribute to the learning of high frequencies in early training, mitigating the effects of frequency bias up to the polynomial order and possibly beyond with smooth activations. ...
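To make the abstract's idea concrete, one way to augment a PINN objective in this spirit is to add a term that penalizes a spectrally differentiated copy of the PDE residual. The sketch below is our own hypothetical illustration, not the authors' implementation; the function name, the `weight` hyperparameter, and the use of a plain mean-squared penalty are all assumptions:

```python
import numpy as np

def fourier_enhanced_loss(residual, dx, order=1, weight=0.1):
    """Hypothetical sketch of a Fourier-space-enhanced PINN objective.

    residual : PDE residual evaluated on a uniform periodic grid
    dx       : grid spacing
    order    : integer differential order applied to the residual
    weight   : strength of the augmented term (illustrative choice)
    """
    # Standard PINN residual loss.
    base = np.mean(residual ** 2)
    # Differentiate the residual spectrally: multiply its FFT by (i*k)**order.
    n = residual.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    d_res = np.fft.ifft((1j * k) ** order * np.fft.fft(residual)).real
    # Augmented term: penalize the differentiated residual as well.
    return base + weight * np.mean(d_res ** 2)
```

Because (i*k)**order amplifies high-wavenumber content of the residual, such a term pushes the optimizer to fit high frequencies earlier, which is consistent with the abstract's NTK eigenvalue-decay argument.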

