[2602.14737] Parameter-Minimal Neural DE Solvers via Horner Polynomials
Summary
The paper presents a novel neural architecture for solving differential equations using Horner polynomials, emphasizing minimal parameters while maintaining accuracy.
Why It Matters
This research addresses the challenge of solving differential equations with neural networks efficiently. By restricting the model to a small set of learnable polynomial coefficients, it offers a resource-efficient alternative to larger networks, with applications in scientific computing and machine learning under tight parameter budgets.
Key Takeaways
- Introduces a parameter-minimal neural architecture for differential equations.
- Utilizes Horner-factorized polynomials for efficient computation.
- Enforces initial conditions directly in the model design.
- Demonstrates improved accuracy over small MLP and sinusoidal-representation baselines while using far fewer parameters.
- Offers a practical approach for scientific modeling in resource-constrained environments.
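The core idea in the takeaways above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: a degree-5 polynomial trial solution for the ODE y' = -y, y(0) = 1 on [0, 1], evaluated in Horner form. The initial condition is enforced exactly by fixing the constant coefficient a0 = 1, and because the residual y'(t) + y(t) at collocation points is linear in the remaining coefficients, plain least squares stands in here for gradient-based training.

```python
import numpy as np

degree = 5
t = np.linspace(0.0, 1.0, 50)          # collocation points

# Column k-1 holds d/dt(t^k) + t^k = k*t^(k-1) + t^k for k = 1..degree,
# i.e. the residual contribution of each free coefficient a_k.
A = np.stack([k * t**(k - 1) + t**k for k in range(1, degree + 1)], axis=1)
b = -np.ones_like(t)                   # moves the fixed a0 = 1 term to the RHS
free, *_ = np.linalg.lstsq(A, b, rcond=None)
coeffs = np.concatenate(([1.0], free)) # a0 fixed by the initial condition

def horner(coeffs, t):
    """Evaluate sum_k coeffs[k] * t**k in Horner form (highest degree first)."""
    result = np.zeros_like(t)
    for c in coeffs[::-1]:
        result = result * t + c
    return result

y = horner(coeffs, t)
err = np.max(np.abs(y - np.exp(-t)))
print(f"a0 = {coeffs[0]} (exact IC), max error vs exp(-t): {err:.2e}")
```

With only five trainable coefficients, the Horner-evaluated polynomial tracks exp(-t) closely on the interval, which illustrates the accuracy-per-parameter trade-off the paper targets.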
Computer Science > Machine Learning
arXiv:2602.14737 (cs) [Submitted on 16 Feb 2026]
Title: Parameter-Minimal Neural DE Solvers via Horner Polynomials
Authors: T. Matulić, D. Seršić
Abstract: We propose a parameter-minimal neural architecture for solving differential equations by restricting the hypothesis class to Horner-factorized polynomials, yielding an implicit, differentiable trial solution with only a small set of learnable coefficients. Initial conditions are enforced exactly by construction by fixing the low-order polynomial degrees of freedom, so training focuses solely on matching the differential-equation residual at collocation points. To reduce approximation error without abandoning the low-parameter regime, we introduce a piecewise ("spline-like") extension that trains multiple small Horner models on subintervals while enforcing continuity (and first-derivative continuity) at segment boundaries. On illustrative ODE benchmarks and a heat-equation example, Horner networks with tens (or fewer) parameters accurately match the solution and its derivatives and outperform small MLP and sinusoidal-representation baselines under the same training settings, demonstrating a practical accuracy-parameter trade-off for resource-efficient scientific modeling.
Subjects: Machine Learning (cs.LG); Signal Processing ...
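The piecewise ("spline-like") extension described in the abstract can also be sketched in miniature. The following toy (an assumption-laden illustration, not the paper's code) fits two low-degree Horner segments for y' = -y, y(0) = 1 on [0, 0.5] and [0.5, 1]. Each segment works in a local variable s = t - t0, and C0/C1 continuity at the knot is enforced by construction: the second segment's constant and linear coefficients are pinned to the first segment's value and slope at t = 0.5, so only the higher-order coefficients are trained (again via least squares on the residual, which is linear here).

```python
import numpy as np

def horner(coeffs, s):
    """Evaluate sum_k coeffs[k] * s**k in Horner form (highest degree first)."""
    result = np.zeros_like(s)
    for c in coeffs[::-1]:
        result = result * s + c
    return result

def deriv_coeffs(coeffs):
    """Coefficients of the derivative polynomial."""
    return np.array([k * c for k, c in enumerate(coeffs)][1:])

def fit_segment(t0, t1, c0, c1, degree=3, n=30):
    """Fit one Horner segment for y' = -y on [t0, t1] in s = t - t0.
    Coefficients c0 (value) and c1 (slope) at the left endpoint are fixed,
    so the segment matches the prescribed boundary data by construction."""
    s = np.linspace(0.0, t1 - t0, n)
    # Free coefficients c2..c_degree; residual y'(s) + y(s) is linear in them.
    A = np.stack([k * s**(k - 1) + s**k for k in range(2, degree + 1)], axis=1)
    b = -(c0 + c1 + c1 * s)            # contribution of the fixed c0, c1 terms
    free, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.concatenate(([c0, c1], free))

# Segment 1 on [0, 0.5]: y(0) = 1 and, from the ODE, y'(0) = -1.
seg1 = fit_segment(0.0, 0.5, 1.0, -1.0)
# Segment 2 inherits segment 1's value and slope at the knot t = 0.5,
# giving continuity and first-derivative continuity exactly.
knot = np.array([0.5])
v = horner(seg1, knot)[0]
d = horner(deriv_coeffs(seg1), knot)[0]
seg2 = fit_segment(0.5, 1.0, v, d)

t = np.linspace(0.0, 1.0, 101)
y = np.where(t < 0.5, horner(seg1, t), horner(seg2, t - 0.5))
err = np.max(np.abs(y - np.exp(-t)))
print(f"max error vs exp(-t): {err:.2e}")
```

Because the knot coefficients are copied rather than penalized, continuity is exact by construction, mirroring how the paper enforces conditions in the model design instead of in the loss.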