[2602.16530] FEKAN: Feature-Enriched Kolmogorov-Arnold Networks
Summary
The paper introduces Feature-Enriched Kolmogorov-Arnold Networks (FEKAN), a model that improves both computational efficiency and predictive accuracy over traditional Kolmogorov-Arnold Networks (KANs) without adding any trainable parameters.
Why It Matters
FEKAN addresses key limitations of existing KAN architectures, namely slow convergence and limited representation capacity. These gains matter most for applications that need efficient function approximation and for physics-informed solvers of complex partial differential equations.
Key Takeaways
- FEKAN improves upon KANs by enhancing computational efficiency.
- The model accelerates convergence and increases representation capacity.
- FEKAN outperforms various KAN variants in function approximation tasks.
- The theoretical foundations of FEKAN support its superior performance.
- FEKAN maintains the same number of trainable parameters as KANs.
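The summary does not spell out the enrichment scheme, so the sketch below is only an illustration of the general idea, not the paper's actual method: each raw input coordinate is replaced by a fixed, non-trainable nonlinear transform of itself. Because the map is parameter-free and dimension-preserving, any downstream network keeps exactly the same trainable parameter count, matching the takeaway above.

```python
import numpy as np

def enrich(x):
    """Hypothetical feature enrichment (illustrative assumption, not
    the FEKAN paper's scheme): replace each coordinate with a fixed
    nonlinear transform of itself. No trainable parameters are added
    and the input dimension is unchanged."""
    return np.sin(np.pi * x) + x

# The enriched input has the same shape as the raw input, so a
# downstream KAN's parameter count is unaffected.
x = np.linspace(-1.0, 1.0, 5).reshape(-1, 1)
print(enrich(x).shape)  # (5, 1)
```

The design point is that the extra representational power comes entirely from the fixed feature map, which is why the parameter budget stays flat.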
Computer Science > Machine Learning
arXiv:2602.16530 (cs)
[Submitted on 18 Feb 2026]
Title: FEKAN: Feature-Enriched Kolmogorov-Arnold Networks
Authors: Sidharth S. Menon, Ameya D. Jagtap
Abstract: Kolmogorov-Arnold Networks (KANs) have recently emerged as a compelling alternative to multilayer perceptrons, offering enhanced interpretability via functional decomposition. However, existing KAN architectures, including spline-, wavelet-, and radial-basis variants, suffer from high computational cost and slow convergence, limiting their scalability and practical applicability. Here, we introduce Feature-Enriched Kolmogorov-Arnold Networks (FEKAN), a simple yet effective extension that preserves all the advantages of KANs while improving computational efficiency and predictive accuracy through feature enrichment, without increasing the number of trainable parameters. By incorporating these additional features, FEKAN accelerates convergence, increases representation capacity, and substantially mitigates the computational overhead characteristic of state-of-the-art KAN architectures. We investigate FEKAN across a comprehensive set of benchmarks, including function-approximation tasks, physics-informed formulations for diverse partial differential equations (PDEs), and neural operator settings that map between input and output function spaces...
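For readers unfamiliar with the KAN family the abstract refers to, the toy layer below sketches the core idea: instead of scalar weights, every edge carries a learnable univariate function, here expressed in a fixed Gaussian basis (one of the radial-basis flavors the abstract mentions). All names and sizes are illustrative assumptions; this is not the paper's implementation.

```python
import numpy as np

def rbf_basis(x, centers, width):
    # x: (batch,) -> (batch, n_basis) Gaussian basis evaluations
    return np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)

class KANLayer:
    """Toy KAN-style layer: each edge (input i -> output j) applies a
    learnable univariate function phi_ij(x) = sum_k c_ijk * B_k(x),
    where the B_k are fixed Gaussian bumps. Only the coefficients
    c_ijk are trainable."""

    def __init__(self, in_dim, out_dim, n_basis=8, rng=None):
        rng = rng or np.random.default_rng(0)
        self.centers = np.linspace(-1.0, 1.0, n_basis)
        self.width = 2.0 / n_basis
        # trainable coefficients: one basis expansion per edge
        self.coef = rng.normal(0.0, 0.1, size=(in_dim, out_dim, n_basis))

    def __call__(self, x):
        # x: (batch, in_dim) -> (batch, out_dim); outputs sum the
        # per-edge univariate functions over all inputs
        out = np.zeros((x.shape[0], self.coef.shape[1]))
        for i in range(self.coef.shape[0]):
            B = rbf_basis(x[:, i], self.centers, self.width)
            out += B @ self.coef[i].T
        return out

layer = KANLayer(in_dim=2, out_dim=3)
y = layer(np.random.default_rng(1).uniform(-1.0, 1.0, size=(4, 2)))
print(y.shape)  # (4, 3)
```

Evaluating a basis expansion on every edge is what makes KANs expressive but costly, which is the overhead the abstract says feature enrichment helps mitigate.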