[2602.19967] Unlearning Noise in PINNs: A Selective Pruning Framework for PDE Inverse Problems
Summary
The paper presents P-PINN, a selective pruning framework for enhancing the robustness of physics-informed neural networks (PINNs) against noise in PDE inverse problems, demonstrating significant improvements in accuracy and stability.
Why It Matters
Because PDE inverse problems are ill-posed and therefore highly sensitive to noise, this research addresses a critical challenge in machine learning applications. The proposed P-PINN framework offers a novel way to improve the reliability of PINNs, with wide-ranging implications for fields like engineering and physics where accurate modeling is essential.
Key Takeaways
- P-PINN selectively prunes neurons influenced by corrupted data to enhance model stability.
- The framework significantly reduces relative error by up to 96.6% compared to baseline PINNs.
- P-PINN integrates a bias-based neuron importance measure for effective pruning.
- The approach allows for lightweight post-processing without complete retraining.
- Numerical experiments validate the framework's effectiveness under noisy conditions.
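The pruning steps listed above might be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the exact bias-based importance formula and the pruning fraction are assumptions, and `neuron_importance` here simply compares mean activations between the reliable and corrupted subsets.

```python
import numpy as np

def neuron_importance(acts_reliable, acts_corrupted):
    """Hypothetical bias-based importance: per-neuron discrepancy between
    mean activations on the reliable vs. corrupted subsets (the paper's
    exact directional measure is not reproduced in this summary)."""
    mu_r = acts_reliable.mean(axis=0)   # shape: (n_neurons,)
    mu_c = acts_corrupted.mean(axis=0)
    return np.abs(mu_c - mu_r)

def prune_mask(importance, frac=0.1):
    """Zero out the top-`frac` fraction of neurons most influenced by
    corrupted data; the retained neurons keep a mask value of 1."""
    k = max(1, int(frac * importance.size))
    idx = np.argsort(importance)[-k:]   # indices of most-affected neurons
    mask = np.ones_like(importance)
    mask[idx] = 0.0
    return mask
```

Applying the mask elementwise to a hidden layer's activations would then suppress the neurons most shaped by the corrupted observations, which is the lightweight post-processing step the takeaways describe.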
Computer Science > Machine Learning
arXiv:2602.19967 (cs)
[Submitted on 23 Feb 2026]
Title: Unlearning Noise in PINNs: A Selective Pruning Framework for PDE Inverse Problems
Authors: Yongsheng Chen, Yong Chen, Wei Guo, Xinghui Zhong
Abstract: Physics-informed neural networks (PINNs) provide a promising framework for solving inverse problems governed by partial differential equations (PDEs) by integrating observational data and physical constraints in a unified optimization objective. However, the ill-posed nature of PDE inverse problems makes them highly sensitive to noise. Even a small fraction of corrupted observations can distort internal neural representations, severely impairing accuracy and destabilizing training. Motivated by recent advances in machine unlearning and structured network pruning, we propose P-PINN, a selective pruning framework designed to unlearn the influence of corrupted data in a pretrained PINN. Specifically, starting from a PINN trained on the full dataset, P-PINN evaluates a joint residual--data fidelity indicator, a weighted combination of data misfit and PDE residuals, to partition the training set into reliable and corrupted subsets. Next, we introduce a bias-based neuron importance measure that quantifies directional activation discrepancies between the two subsets,...
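The partition step the abstract describes, scoring each sample by a weighted combination of data misfit and PDE residual and splitting the training set accordingly, could be sketched like this. The weight `alpha` and the quantile-based threshold are illustrative assumptions; the paper's actual weighting and cutoff are not given in this summary.

```python
import numpy as np

def partition_by_indicator(data_misfit, pde_residual, alpha=0.5, quantile=0.9):
    """Hypothetical joint residual--data fidelity indicator: a convex
    combination of per-sample data misfit and PDE residual magnitudes.
    Samples whose score exceeds a quantile threshold are flagged as
    corrupted; the rest are treated as reliable."""
    score = alpha * np.abs(data_misfit) + (1.0 - alpha) * np.abs(pde_residual)
    thresh = np.quantile(score, quantile)
    corrupted = score > thresh
    return ~corrupted, corrupted  # (reliable mask, corrupted mask)
```

The two masks would then feed the bias-based importance computation, with activations collected separately on the reliable and corrupted subsets.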