[2604.00473] Phase space integrity in neural network models of Hamiltonian dynamics: A Lagrangian descriptor approach

arXiv - Machine Learning 4 min read

Computer Science > Machine Learning — arXiv:2604.00473 (cs)
Submitted on 1 Apr 2026

Title: Phase space integrity in neural network models of Hamiltonian dynamics: A Lagrangian descriptor approach
Authors: Abrari Noor Hasmi, Haralampos Hatzikirou, Hadi Susanto

Abstract: We propose Lagrangian Descriptors (LDs) as a diagnostic framework for evaluating neural network models of Hamiltonian systems beyond conventional trajectory-based metrics. Standard error measures quantify short-term predictive accuracy but provide little insight into global geometric structures such as orbits and separatrices. Existing evaluation tools in dissipative systems are inadequate for Hamiltonian dynamics due to fundamental differences in the systems. By constructing probability density functions weighted by LD values, we embed geometric information into a statistical framework suitable for information-theoretic comparison. We benchmark physically constrained architectures (SympNet, HénonNet, Generalized Hamiltonian Neural Networks) against data-driven Reservoir Computing across two canonical systems. For the Duffing oscillator, all models recover the homoclinic orbit geometry with modest data requirements, though their accuracy near critical structures varies. For the three-mode nonlinear Schröding...
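The Lagrangian descriptor the abstract builds on is commonly defined as the arc length a trajectory traces in phase space over a fixed time window, integrated both forward and backward from each initial condition; separatrices then show up as sharp features in the LD field, and normalizing the LD values over a grid yields the kind of LD-weighted probability density the authors compare information-theoretically. The sketch below is not the paper's code: it is a minimal pure-Python illustration for the unforced, undamped Duffing oscillator (H = p²/2 − x²/2 + x⁴/4), with the grid size, integration window `tau`, and step `h` chosen arbitrarily for speed.

```python
import math

def duffing(state):
    # Hamiltonian vector field of H = p^2/2 - x^2/2 + x^4/4:
    # dx/dt = p, dp/dt = x - x^3 (hyperbolic fixed point at the origin).
    x, p = state
    return (p, x - x**3)

def rk4_step(f, s, h):
    # One classical fourth-order Runge-Kutta step for a 2D system.
    k1 = f(s)
    k2 = f((s[0] + 0.5 * h * k1[0], s[1] + 0.5 * h * k1[1]))
    k3 = f((s[0] + 0.5 * h * k2[0], s[1] + 0.5 * h * k2[1]))
    k4 = f((s[0] + h * k3[0], s[1] + h * k3[1]))
    return (s[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            s[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def lagrangian_descriptor(s0, tau=5.0, h=0.01):
    # Arc-length LD: accumulate ||v|| dt along the trajectory, integrating
    # both forward (sign = +1) and backward (sign = -1) in time from s0.
    total = 0.0
    for sign in (+1.0, -1.0):
        s, t = s0, 0.0
        while t < tau:
            v = duffing(s)
            total += math.hypot(v[0], v[1]) * h  # rectangle-rule quadrature
            s = rk4_step(duffing, s, sign * h)
            t += h
    return total

# Evaluate LDs on a coarse phase-space grid and normalize them into the
# LD-weighted probability mass function used for statistical comparison.
grid = [(x / 4, p / 4) for x in range(-8, 9) for p in range(-8, 9)]
lds = [lagrangian_descriptor(s) for s in grid]
z = sum(lds)
pdf = [v / z for v in lds]
```

Under this construction, a learned model (e.g. a SympNet surrogate) would produce its own LD field on the same grid, and the two normalized distributions could then be compared with a divergence measure such as KL, which is the information-theoretic comparison the abstract describes.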

Originally published on April 02, 2026. Curated by AI News.
