[2511.15487] NTK-Guided Implicit Neural Teaching

arXiv - Machine Learning · 3 min read

Summary

The paper presents NTK-Guided Implicit Neural Teaching (NINT), a method that accelerates training of Implicit Neural Representations (INRs) by optimizing coordinate selection, resulting in faster convergence and reduced training time while maintaining representation quality.

Why It Matters

This research addresses the computational challenges associated with training Implicit Neural Representations, which are increasingly used in various applications like image and audio reconstruction. By improving training efficiency, this method could enhance the practicality of INRs in real-world scenarios, making advanced machine learning techniques more accessible and effective.

Key Takeaways

  • NINT accelerates training by dynamically selecting coordinates for optimization.
  • Utilizes Neural Tangent Kernel (NTK) to enhance training efficiency.
  • Achieves nearly 50% reduction in training time without sacrificing quality.
  • Demonstrates state-of-the-art performance in sampling-based strategies.
  • Addresses critical computational costs in high-resolution signal fitting.

Computer Science > Machine Learning

arXiv:2511.15487 (cs) [Submitted on 19 Nov 2025 (v1), last revised 25 Feb 2026 (this version, v2)]

Title: NTK-Guided Implicit Neural Teaching

Authors: Chen Zhang, Wei Zuo, Bingyang Cheng, Yikun Wang, Wei-Bin Kou, Yik Chung Wu, Ngai Wong

Abstract: Implicit Neural Representations (INRs) parameterize continuous signals via multilayer perceptrons (MLPs), enabling compact, resolution-independent modeling for tasks like image, audio, and 3D reconstruction. However, fitting high-resolution signals demands optimizing over millions of coordinates, incurring prohibitive computational costs. To address this, we propose NTK-Guided Implicit Neural Teaching (NINT), which accelerates training by dynamically selecting coordinates that maximize global functional updates. Leveraging the Neural Tangent Kernel (NTK), NINT scores examples by the norm of their NTK-augmented loss gradients, capturing both fitting errors and heterogeneous leverage (self-influence and cross-coordinate coupling). This dual consideration enables faster convergence compared to existing methods. Through extensive experiments, we demonstrate that NINT reduces training time by nearly half while maintaining or improving representation quality, establishing state-of-the-art acceleration among recent sampling-based strategies.
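The scoring rule described in the abstract — ranking coordinates by the norm of their NTK-augmented contribution, so that both self-influence and cross-coordinate coupling are counted — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the random Jacobian stands in for a real INR's per-coordinate gradients, and all variable names (`J`, `K`, `r`, `scores`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a tiny MLP fitting a 1-D signal.
# J[i, p] = d f(x_i) / d theta_p (Jacobian of outputs w.r.t. parameters);
# a random matrix stands in for a real network's Jacobian here.
n_coords, n_params = 64, 32
J = rng.normal(size=(n_coords, n_params))

# Empirical NTK: K[i, j] = <grad f(x_i), grad f(x_j)>.
K = J @ J.T

# Residuals r_i = f(x_i) - y_i at the current parameters.
r = rng.normal(size=n_coords)

# Score each coordinate i by the norm of its contribution K[:, i] * r_i
# to the global functional update K @ r. The diagonal term K[i, i] * r_i
# reflects self-influence; off-diagonal terms reflect cross-coordinate
# coupling, so a coordinate that moves many other predictions ranks high.
scores = np.linalg.norm(K * r[None, :], axis=0)

# Keep only the top-k coordinates for the next optimization step.
k = 16
selected = np.argsort(scores)[-k:]
```

In this toy form, a training loop would recompute `scores` periodically and back-propagate the loss only through the `selected` coordinates, which is where the reported wall-clock savings would come from.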
