[2602.14737] Parameter-Minimal Neural DE Solvers via Horner Polynomials

arXiv - Machine Learning

Summary

The paper presents a neural architecture for solving differential equations whose trial solutions are Horner-factorized polynomials, achieving accurate solutions with only a small set of learnable coefficients.

Why It Matters

This research is significant as it addresses the challenge of efficiently modeling differential equations with neural networks. By minimizing parameters, it offers a resource-efficient solution that can enhance scientific computing and machine learning applications.

Key Takeaways

  • Introduces a parameter-minimal neural architecture for differential equations.
  • Utilizes Horner-factorized polynomials for efficient computation.
  • Enforces initial conditions directly in the model design.
  • Demonstrates improved accuracy over small MLP and sinusoidal-representation baselines with fewer parameters.
  • Offers a practical approach for scientific modeling in resource-constrained environments.
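The Horner factorization mentioned above rewrites a polynomial a₀ + a₁x + … + aₙxⁿ as a₀ + x·(a₁ + x·(a₂ + …)), so evaluation needs only n multiplications and n additions and no explicit powers. A minimal sketch of this evaluation scheme (not code from the paper):

```python
def horner(coeffs, x):
    """Evaluate a0 + a1*x + ... + an*x^n in Horner form:
    a0 + x*(a1 + x*(a2 + ...)) -- one multiply-add per coefficient."""
    result = 0.0
    for c in reversed(coeffs):
        result = result * x + c
    return result

# p(x) = 1 + 2x + 3x^2 at x = 2: 1 + 4 + 12 = 17
print(horner([1.0, 2.0, 3.0], 2.0))
```

In the paper's setting, the coefficients of such a polynomial are the model's only learnable parameters, which is why the parameter counts stay in the tens or fewer.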

Computer Science > Machine Learning

arXiv:2602.14737 (cs) [Submitted on 16 Feb 2026]

Title: Parameter-Minimal Neural DE Solvers via Horner Polynomials
Authors: T. Matulić, D. Seršić

Abstract: We propose a parameter-minimal neural architecture for solving differential equations by restricting the hypothesis class to Horner-factorized polynomials, yielding an implicit, differentiable trial solution with only a small set of learnable coefficients. Initial conditions are enforced exactly by construction by fixing the low-order polynomial degrees of freedom, so training focuses solely on matching the differential-equation residual at collocation points. To reduce approximation error without abandoning the low-parameter regime, we introduce a piecewise ("spline-like") extension that trains multiple small Horner models on subintervals while enforcing continuity (and first-derivative continuity) at segment boundaries. On illustrative ODE benchmarks and a heat-equation example, Horner networks with tens (or fewer) parameters accurately match the solution and its derivatives and outperform small MLP and sinusoidal-representation baselines under the same training settings, demonstrating a practical accuracy-parameter trade-off for resource-efficient scientific modeling.

Subjects: Machine Learning (cs.LG); Signal Processing ...
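The abstract's recipe, with a polynomial trial solution, the initial condition enforced exactly by fixing the constant coefficient, and the remaining coefficients fit to the ODE residual at collocation points, can be sketched on a toy problem. The example below is an illustration under assumed choices (degree 8, 32 collocation points, the ODE y' = -y with y(0) = 1), not the paper's code; since the residual is linear in the free coefficients here, it solves the collocation system by least squares rather than gradient-based training:

```python
import numpy as np

# Trial solution y(t) = a0 + a1*t + ... + an*t^n, evaluated in Horner form.
# The initial condition y(0) = 1 is enforced by construction: a0 = 1 is fixed,
# and only a1..an are fit, by matching the residual y'(t) + y(t) = 0
# (true solution exp(-t)) at collocation points.
n = 8                            # polynomial degree (hypothetical choice)
t = np.linspace(0.0, 1.0, 32)    # collocation points on [0, 1]

# Residual at t_j: sum_k a_k * (k*t^(k-1) + t^k) + a0 = 0, with a0 = 1 fixed,
# so the free coefficients solve the linear system A @ a = -1.
k = np.arange(1, n + 1)
A = k * t[:, None] ** (k - 1) + t[:, None] ** k
b = -np.ones_like(t)
a_free, *_ = np.linalg.lstsq(A, b, rcond=None)

coeffs = np.concatenate(([1.0], a_free))  # a0 = 1 re-attached

def horner(c, x):
    """Evaluate c0 + x*(c1 + x*(c2 + ...)) -- the Horner factorization."""
    y = np.zeros_like(x)
    for ck in reversed(c):
        y = y * x + ck
    return y

err = np.max(np.abs(horner(coeffs, t) - np.exp(-t)))
print(f"max abs error vs exp(-t): {err:.2e}")
```

With nine coefficients total this matches exp(-t) to near machine precision on [0, 1], illustrating the accuracy-parameter trade-off the abstract describes; the paper's piecewise extension would repeat this on subintervals with continuity constraints at the segment boundaries.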
