[2602.13174] Learning functional components of PDEs from data using neural networks

arXiv - Machine Learning

Summary

The paper shows how neural networks embedded in partial differential equations (PDEs) can be trained on data to approximate unknown functional components, such as interaction kernels and external potentials, improving the model's predictive capabilities.

Why It Matters

This research is significant because many PDE models contain functions that are difficult or impossible to measure directly, which limits accurate modeling across scientific fields. By embedding neural networks in the PDE and training them on data, the study proposes a method for recovering these functions, potentially leading to better predictive tools in engineering and physics.

Key Takeaways

  • Neural networks embedded in PDEs can approximate unknown functions with arbitrary accuracy as they are trained on data.
  • The method allows for the recovery of interaction kernels and external potentials from steady state data.
  • Factors like sampling density and measurement noise significantly impact function recovery success.
  • This approach utilizes standard parameter-fitting workflows, making it accessible for practitioners.
  • The trained PDE can be treated like a normal PDE for generating system predictions.

Computer Science > Machine Learning
arXiv:2602.13174 (cs) [Submitted on 13 Feb 2026]

Title: Learning functional components of PDEs from data using neural networks
Authors: Torkel E. Loman, Yurij Salmaniw, Antonio Leon Villares, Jose A. Carrillo, Ruth E. Baker

Abstract: Partial differential equations often contain unknown functions that are difficult or impossible to measure directly, hampering our ability to derive predictions from the model. Workflows for recovering scalar PDE parameters from data are well studied; here we show how similar workflows can be used to recover functions from data. Specifically, we embed neural networks into the PDE and show how, as they are trained on data, they can approximate unknown functions with arbitrary accuracy. Using nonlocal aggregation-diffusion equations as a case study, we recover interaction kernels and external potentials from steady-state data. We investigate how a wide range of factors, such as the number of available solutions, their properties, sampling density, and measurement noise, affect our ability to successfully recover functions. Our approach is advantageous because it can utilise standard parameter-fitting workflows, and because the trained PDE can be treated as a normal PDE for purposes such as generating system predictions.
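To illustrate the core parameter-fitting idea the abstract describes — treating a small neural network as the unknown function and training its weights against data — here is a minimal NumPy sketch. It fits a one-hidden-layer network to noisy samples of a hypothetical external potential V(x) = x²/2; the target function, network size, and training settings are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "unknown" external potential, observed only through
# noisy samples (a stand-in for the paper's steady-state data).
x = np.linspace(-2.0, 2.0, 200)
v_obs = 0.5 * x**2 + 0.01 * rng.standard_normal(x.shape)

# Tiny one-hidden-layer MLP V_theta(x) with tanh activation.
H = 16
W1 = 0.5 * rng.standard_normal((H, 1))
b1 = np.zeros(H)
W2 = 0.5 * rng.standard_normal(H)
b2 = 0.0

lr = 0.05
X = x[:, None]                      # shape (N, 1)
for step in range(20000):
    # Forward pass: V_theta(x) = W2 . tanh(W1 x + b1) + b2
    a = np.tanh(X @ W1.T + b1)      # (N, H)
    pred = a @ W2 + b2              # (N,)
    err = pred - v_obs
    loss = np.mean(err**2)

    # Backward pass: hand-derived gradients of the mean-squared error.
    g_pred = 2.0 * err / len(x)     # dL/dpred
    gW2 = a.T @ g_pred
    gb2 = g_pred.sum()
    g_z = np.outer(g_pred, W2) * (1.0 - a**2)   # tanh derivative
    gW1 = g_z.T @ X
    gb1 = g_z.sum(axis=0)

    # Plain gradient-descent update of the network parameters.
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(f"final MSE: {loss:.4f}")
```

In the paper's setting the loss would instead measure the mismatch between the PDE's steady-state solution (with the network substituted for the unknown function) and observed data, but the fitting loop has the same shape: forward-evaluate, compute a data loss, and update the network weights by gradient descent.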
