[2511.15487] NTK-Guided Implicit Neural Teaching
Summary
The paper presents NTK-Guided Implicit Neural Teaching (NINT), a method that accelerates training of Implicit Neural Representations (INRs) by optimizing coordinate selection, resulting in faster convergence and reduced training time while maintaining representation quality.
Why It Matters
This research addresses the computational challenges associated with training Implicit Neural Representations, which are increasingly used in various applications like image and audio reconstruction. By improving training efficiency, this method could enhance the practicality of INRs in real-world scenarios, making advanced machine learning techniques more accessible and effective.
Key Takeaways
- NINT accelerates training by dynamically selecting coordinates for optimization.
- Utilizes Neural Tangent Kernel (NTK) to enhance training efficiency.
- Achieves nearly 50% reduction in training time without sacrificing quality.
- Demonstrates state-of-the-art performance in sampling-based strategies.
- Addresses critical computational costs in high-resolution signal fitting.
Computer Science > Machine Learning
arXiv:2511.15487 (cs)
[Submitted on 19 Nov 2025 (v1), last revised 25 Feb 2026 (this version, v2)]
Title: NTK-Guided Implicit Neural Teaching
Authors: Chen Zhang, Wei Zuo, Bingyang Cheng, Yikun Wang, Wei-Bin Kou, Yik Chung WU, Ngai Wong
Abstract: Implicit Neural Representations (INRs) parameterize continuous signals via multilayer perceptrons (MLPs), enabling compact, resolution-independent modeling for tasks like image, audio, and 3D reconstruction. However, fitting high-resolution signals demands optimizing over millions of coordinates, incurring prohibitive computational costs. To address this, we propose NTK-Guided Implicit Neural Teaching (NINT), which accelerates training by dynamically selecting coordinates that maximize global functional updates. Leveraging the Neural Tangent Kernel (NTK), NINT scores examples by the norm of their NTK-augmented loss gradients, capturing both fitting errors and heterogeneous leverage (self-influence and cross-coordinate coupling). This dual consideration enables faster convergence compared to existing methods. Through extensive experiments, we demonstrate that NINT significantly reduces training time by nearly half while maintaining or improving representation quality, establishing state-of-the-art acceleration among recent sampling-based strategies.
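The abstract describes scoring coordinates by the norm of their NTK-augmented loss gradients, so that selection reflects both a coordinate's own fitting error and its coupling to other coordinates through the kernel. The sketch below illustrates that idea on a toy 1D signal with a tiny one-hidden-layer MLP: it builds the empirical NTK from per-example Jacobians and ranks coordinates by |K r|. All concrete choices here (network size, score definition, top-k selection) are illustrative assumptions, not the authors' exact implementation.

```python
# Hedged sketch of NTK-guided coordinate selection (assumed reading of NINT).
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D signal and a tiny one-hidden-layer MLP standing in for an INR.
x = np.linspace(-1.0, 1.0, 64)[:, None]      # coordinates, shape (N, 1)
y = np.sin(3 * np.pi * x)                    # target signal, shape (N, 1)

W1 = rng.normal(0.0, 1.0, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.1, (32, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

def jacobian(x):
    """Per-example Jacobian of the scalar output w.r.t. all parameters."""
    h = np.tanh(x @ W1 + b1)                 # (N, 32)
    dh = 1.0 - h**2                          # tanh'
    return np.concatenate([
        (x * (dh * W2.T)).reshape(len(x), -1),  # d out / d W1
        dh * W2.T,                              # d out / d b1
        h,                                      # d out / d W2
        np.ones((len(x), 1)),                   # d out / d b2
    ], axis=1)                                  # (N, P)

residual = (forward(x) - y).ravel()          # per-coordinate fitting error

J = jacobian(x)
K = J @ J.T                                  # empirical NTK, shape (N, N)

# NINT-style score (assumed form): magnitude of the NTK-augmented loss
# gradient, combining self-influence (diagonal) with cross-coordinate
# coupling (off-diagonal entries of K).
scores = np.abs(K @ residual)

# Train this step only on the highest-leverage coordinates.
top_k = np.argsort(scores)[::-1][:16]
```

In a full training loop, the selected `top_k` coordinates would receive the gradient update at each step, and the scores would be refreshed periodically as the NTK and residuals evolve.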