[2505.18288] Operator Learning for Schrödinger Equation: Unitarity, Error Bounds, and Time Generalization

arXiv - Machine Learning


Statistics > Machine Learning

arXiv:2505.18288 (stat)

[Submitted on 23 May 2025 (v1), last revised 3 Apr 2026 (this version, v2)]

Title: Operator Learning for Schrödinger Equation: Unitarity, Error Bounds, and Time Generalization

Authors: Yash Patel, Unique Subedi, Ambuj Tewari

Abstract: We consider the problem of learning the evolution operator for the time-dependent Schrödinger equation, where the Hamiltonian may vary with time. Existing neural network-based surrogates often ignore fundamental properties of the Schrödinger equation, such as linearity and unitarity, and lack theoretical guarantees on prediction error or time generalization. To address this, we introduce a linear estimator for the evolution operator that preserves a weak form of unitarity. We establish both upper and lower bounds on the prediction error of the proposed estimator that hold uniformly over classes of sufficiently smooth initial wave functions. Additionally, we derive time generalization bounds that quantify how the estimator extrapolates beyond the time points seen during training. Experiments on real-world Hamiltonians, including hydrogen atoms, ion traps for qubit design, and optical lattices, show that our estimator achieves relative errors up to two orders of magnitude smaller than state-...
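The abstract's central constraint is that the true evolution operator is unitary, so it conserves the norm of the wave function. The paper's own estimator is not reproduced here; as a minimal illustrative sketch of how exact unitarity can be enforced on a learned linear operator, one standard device (an assumption of this example, not the paper's method) is to project a fitted matrix onto the nearest unitary matrix via the polar decomposition:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# Hypothetical learned linear propagator (stand-in for a matrix fit
# from (psi_t, psi_{t+dt}) training pairs); not unitary in general.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Nearest unitary matrix in Frobenius norm: if A = W @ diag(s) @ Vh
# is the SVD, the polar factor U = W @ Vh is the projection of A
# onto the unitary group.
W, _, Vh = np.linalg.svd(A)
U = W @ Vh

# U is exactly unitary, so it conserves the wave function's norm.
psi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
psi /= np.linalg.norm(psi)
print(np.allclose(U.conj().T @ U, np.eye(n)))    # True
print(np.isclose(np.linalg.norm(U @ psi), 1.0))  # True
```

The projection changes only the singular values of the fitted operator (all set to 1), leaving its "rotational" part intact, which is why norm conservation holds exactly for every input state rather than only approximately on the training data.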

Originally published on April 07, 2026. Curated by AI News.
