[2602.19967] Unlearning Noise in PINNs: A Selective Pruning Framework for PDE Inverse Problems


Summary

The paper presents P-PINN, a selective pruning framework for enhancing the robustness of physics-informed neural networks (PINNs) against noise in PDE inverse problems, demonstrating significant improvements in accuracy and stability.

Why It Matters

Because PDE inverse problems are ill-posed, even a small fraction of noisy observations can severely degrade accuracy and destabilize training. The proposed P-PINN framework offers a novel way to improve the reliability of PINNs, with wide-ranging implications in fields such as engineering and physics where accurate modeling from noisy measurements is essential.

Key Takeaways

  • P-PINN selectively prunes neurons influenced by corrupted data to enhance model stability.
  • The framework reduces relative error by up to 96.6% compared to baseline PINNs.
  • P-PINN integrates a bias-based neuron importance measure for effective pruning.
  • The approach allows for lightweight post-processing without complete retraining.
  • Numerical experiments validate the framework's effectiveness under noisy conditions.
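The first two steps above (scoring each observation with a joint residual–data fidelity indicator, then partitioning the training set into reliable and corrupted subsets) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the weight `alpha` and the quantile threshold are hypothetical choices.

```python
import numpy as np

def partition_observations(data_misfit, pde_residual, alpha=0.5, quantile=0.75):
    """Split training points into reliable / corrupted subsets using a
    weighted combination of per-point data misfit and PDE residual.
    `alpha` and `quantile` are illustrative, not values from the paper."""
    # Normalize each term so the weighted combination is scale-free.
    misfit = np.abs(data_misfit) / (np.abs(data_misfit).max() + 1e-12)
    resid = np.abs(pde_residual) / (np.abs(pde_residual).max() + 1e-12)
    indicator = alpha * misfit + (1.0 - alpha) * resid
    # Flag the points with the largest indicator values as corrupted.
    threshold = np.quantile(indicator, quantile)
    corrupted = indicator > threshold
    return ~corrupted, corrupted
```

A quantile threshold is one simple way to binarize the indicator; the paper's actual partitioning rule may differ.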

Computer Science > Machine Learning

arXiv:2602.19967 (cs) [Submitted on 23 Feb 2026]

Title: Unlearning Noise in PINNs: A Selective Pruning Framework for PDE Inverse Problems

Authors: Yongsheng Chen, Yong Chen, Wei Guo, Xinghui Zhong

Abstract: Physics-informed neural networks (PINNs) provide a promising framework for solving inverse problems governed by partial differential equations (PDEs) by integrating observational data and physical constraints in a unified optimization objective. However, the ill-posed nature of PDE inverse problems makes them highly sensitive to noise. Even a small fraction of corrupted observations can distort internal neural representations, severely impairing accuracy and destabilizing training. Motivated by recent advances in machine unlearning and structured network pruning, we propose P-PINN, a selective pruning framework designed to unlearn the influence of corrupted data in a pretrained PINN. Specifically, starting from a PINN trained on the full dataset, P-PINN evaluates a joint residual--data fidelity indicator, a weighted combination of data misfit and PDE residuals, to partition the training set into reliable and corrupted subsets. Next, we introduce a bias-based neuron importance measure that quantifies directional activation discrepancies between the two subsets,...
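The bias-based neuron importance measure described in the abstract scores neurons by the directional activation discrepancy between the reliable and corrupted subsets, and the most noise-influenced neurons are then pruned. A simplified sketch, assuming per-neuron activation matrices for the two subsets (the function names and the pruning fraction are hypothetical, not from the paper):

```python
import numpy as np

def neuron_discrepancy(acts_reliable, acts_corrupted):
    """Score each hidden neuron by the gap between its mean activation on
    reliable vs. corrupted inputs — a simplified stand-in for the paper's
    bias-based importance measure. Inputs have shape (n_samples, n_neurons)."""
    mean_r = acts_reliable.mean(axis=0)
    mean_c = acts_corrupted.mean(axis=0)
    return np.abs(mean_c - mean_r)

def prune_mask(scores, frac=0.1):
    """Return a keep-mask that drops the top `frac` of neurons by score.
    Applying it (e.g. zeroing those units) is a lightweight post-processing
    step that avoids full retraining."""
    k = max(1, int(frac * scores.size))
    cut = np.sort(scores)[-k]  # score of the k-th most discrepant neuron
    return scores < cut
```

Zeroing the masked units is only one way to realize the pruning; the paper may instead remove the neurons structurally or fine-tune after pruning.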
