[2602.16530] FEKAN: Feature-Enriched Kolmogorov-Arnold Networks

arXiv - Machine Learning · 4 min read

Summary

The paper introduces Feature-Enriched Kolmogorov-Arnold Networks (FEKAN), an extension of Kolmogorov-Arnold Networks (KANs) that improves computational efficiency and predictive accuracy without increasing the number of trainable parameters.

Why It Matters

FEKAN addresses the limitations of existing KAN architectures by improving convergence speed and representation capacity, making it a significant advancement in machine learning. This innovation is crucial for applications requiring efficient function approximation and solving complex partial differential equations.

Key Takeaways

  • FEKAN improves upon KANs by enhancing computational efficiency.
  • The model accelerates convergence and increases representation capacity.
  • FEKAN outperforms various KAN variants in function approximation tasks.
  • The theoretical foundations of FEKAN support its superior performance.
  • FEKAN maintains the same number of trainable parameters as KANs.

Computer Science > Machine Learning
arXiv:2602.16530 (cs) · Submitted on 18 Feb 2026

Title: FEKAN: Feature-Enriched Kolmogorov-Arnold Networks
Authors: Sidharth S. Menon, Ameya D. Jagtap

Abstract: Kolmogorov-Arnold Networks (KANs) have recently emerged as a compelling alternative to multilayer perceptrons, offering enhanced interpretability via functional decomposition. However, existing KAN architectures, including spline-, wavelet-, and radial-basis variants, suffer from high computational cost and slow convergence, limiting scalability and practical applicability. Here, we introduce Feature-Enriched Kolmogorov-Arnold Networks (FEKAN), a simple yet effective extension that preserves all the advantages of KANs while improving computational efficiency and predictive accuracy through feature enrichment, without increasing the number of trainable parameters. By incorporating these additional features, FEKAN accelerates convergence, increases representation capacity, and substantially mitigates the computational overhead characteristic of state-of-the-art KAN architectures. We investigate FEKAN across a comprehensive set of benchmarks, including function-approximation tasks, physics-informed formulations for diverse partial differential equations (PDEs), and neural operator settings that map between input and output function sp...
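The excerpt does not specify which enrichment features the paper uses, so the following is only a minimal sketch of the general idea: append fixed, parameter-free transforms of the input (here, hypothetically, sine and cosine features) before a KAN-style layer whose edge functions live in a fixed radial-basis expansion. The `enrich_features` helper and the RBF layer are illustrative assumptions, not the paper's architecture; in practice one would shrink the basis or layer widths to keep the total trainable-parameter count matched to the baseline KAN, as the paper claims.

```python
import numpy as np

def enrich_features(x):
    """Append fixed, parameter-free features (hypothetical choice: sin/cos).

    The enrichment itself introduces no trainable parameters; only the
    input width seen by the next layer grows.
    """
    return np.concatenate([x, np.sin(np.pi * x), np.cos(np.pi * x)], axis=-1)

class RBFKANLayer:
    """Minimal radial-basis KAN-style layer: each input-output edge carries a
    learnable 1-D function expressed in a fixed Gaussian basis."""

    def __init__(self, in_dim, out_dim, n_basis=8, seed=0):
        rng = np.random.default_rng(seed)
        self.centers = np.linspace(-1.0, 1.0, n_basis)               # fixed basis centers
        self.coef = rng.normal(0.0, 0.1, (in_dim, out_dim, n_basis))  # trainable coefficients

    def __call__(self, x):
        # phi: (batch, in_dim, n_basis) Gaussian basis evaluations per input coordinate
        phi = np.exp(-((x[..., None] - self.centers) ** 2) / 0.1)
        # Sum the learnable edge functions over inputs -> (batch, out_dim)
        return np.einsum('bik,iok->bo', phi, self.coef)

# Usage: enrichment widens a 2-feature input to 6 features at zero parameter cost
x = np.random.default_rng(1).uniform(-1.0, 1.0, (4, 2))
layer = RBFKANLayer(in_dim=6, out_dim=3)
y = layer(enrich_features(x))
print(y.shape)  # (4, 3)
```

The point of the sketch is only the placement of the enrichment: richer fixed features let the same learnable basis represent more functions, which is consistent with the abstract's claim of faster convergence at constant parameter count.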
