[2306.02192] Correcting Auto-Differentiation in Neural-ODE Training


arXiv - Machine Learning

About this article


Computer Science > Machine Learning
arXiv:2306.02192 (cs)
[Submitted on 3 Jun 2023 (v1), last revised 27 Mar 2026 (this version, v3)]

Title: Correcting Auto-Differentiation in Neural-ODE Training
Authors: Yewei Xu, Shi Chen, Qin Li

Abstract: Does the use of auto-differentiation yield reasonable updates for deep neural networks (DNNs)? Specifically, when DNNs are designed to adhere to neural ODE architectures, can we trust the gradients provided by auto-differentiation? Through mathematical analysis and numerical evidence, we demonstrate that when neural networks employ high-order methods, such as Linear Multistep Methods (LMM) or Explicit Runge-Kutta Methods (ERK), to approximate the underlying ODE flows, brute-force auto-differentiation often introduces artificial oscillations in the gradients that prevent convergence. In the cases of Leapfrog and 2-stage ERK, we propose simple post-processing techniques that effectively eliminate these oscillations, correct the gradient computation, and thus return accurate updates.

Subjects: Machine Learning (cs.LG); Numerical Analysis (math.NA)
MSC classes: 65D25 (Primary), 65L06, 90C31 (Secondary)
Cite as: arXiv:2306.02192 [cs.LG] (or arXiv:2306.02192v3 [cs.LG] for this version)
DOI: https://doi.org/10.48550/arXiv.2306.02192
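To make the setting concrete, here is a minimal sketch (my own illustration, not the paper's code or its correction method) of what "brute-force auto-differentiation" of a high-order scheme means: we integrate a toy scalar ODE x'(t) = a * x(t) with the two-step Leapfrog scheme and propagate the sensitivity s = dx/da through the same unrolled recursion, which is exactly what differentiating the unrolled computation graph step by step produces. The function name and toy problem are assumptions for illustration only; the paper's analysis concerns artificial oscillations such gradients can develop during neural-ODE training, which this benign scalar example does not exhibit.

```python
import math

def leapfrog_with_sensitivity(a, x0, T=1.0, n=100):
    """Integrate x' = a*x by Leapfrog and carry dx/da through the recursion."""
    h = T / n
    # Leapfrog needs two starting values; seed the second with forward Euler.
    x_prev, x_curr = x0, x0 + h * a * x0
    s_prev, s_curr = 0.0, h * x0  # s1 = d(x1)/da, with s0 = 0
    for _ in range(n - 1):
        x_next = x_prev + 2 * h * a * x_curr             # x_{k+1} = x_{k-1} + 2h f(x_k)
        s_next = s_prev + 2 * h * (x_curr + a * s_curr)  # product rule on a * x_k
        x_prev, x_curr = x_curr, x_next
        s_prev, s_curr = s_curr, s_next
    return x_curr, s_curr

a, x0 = 0.5, 1.0
x_T, dx_da = leapfrog_with_sensitivity(a, x0)
# Exact values: x(1) = e^0.5 and dx/da = 1 * e^0.5, both about 1.6487
print(f"x(T) = {x_T:.4f}, dx/da = {dx_da:.4f}")
```

The sensitivity recursion is the derivative of the discrete scheme itself, not of the continuous flow; the paper's point is that for multistep and multi-stage methods these two can disagree in ways that destabilize training unless the gradient is post-processed.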

Originally published on March 31, 2026. Curated by AI News.

