[2504.05806] Meta-Continual Learning of Neural Fields
Summary
The paper introduces Meta-Continual Learning of Neural Fields (MCL-NF), a novel approach that enhances the efficiency and quality of neural field learning through a modular architecture and Fisher Information Maximization loss.
Why It Matters
This research addresses critical challenges in continual learning, such as catastrophic forgetting and slow convergence, which remain significant barriers in AI development. By improving how neural fields adapt to new data, this work has implications for applications including image, audio, and video reconstruction and novel view synthesis.
Key Takeaways
- Introduces MCL-NF, a new problem setting for continual learning of neural fields.
- Combines modular architecture with optimization-based meta-learning.
- Implements Fisher Information Maximization loss to enhance learning generalization.
- Demonstrates superior reconstruction quality and speed across six diverse datasets.
- Achieves rapid adaptation for city-scale NeRF rendering with fewer parameters.
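The Fisher Information Maximization idea can be illustrated with a minimal sketch. The paper's exact formulation is not reproduced here; the snippet below is a simplified, hypothetical illustration in which each sample's Fisher information is approximated by its squared gradient norm (a standard diagonal proxy), and samples carrying more information are weighted more heavily in the reconstruction loss. The function name and normalization scheme are assumptions, not the authors' implementation.

```python
import numpy as np

def fisher_weighted_mse(pred, target, grads):
    """Per-sample MSE reweighted by a diagonal Fisher-information proxy.

    pred, target : (n_samples, dim) arrays of predictions and ground truth.
    grads        : (n_samples, n_params) per-sample parameter gradients.

    Hypothetical sketch: Fisher information per sample is approximated
    by the squared gradient norm; weights are normalized so a uniform
    Fisher estimate reduces this to the plain mean-squared error.
    """
    # Diagonal Fisher proxy: squared gradient norm of each sample.
    fisher = np.sum(grads ** 2, axis=1)
    # Normalize so the weights average to 1 across the batch.
    weights = fisher * len(fisher) / (fisher.sum() + 1e-12)
    # Standard per-sample reconstruction error.
    per_sample = np.mean((pred - target) ** 2, axis=1)
    return float(np.mean(weights * per_sample))
```

With identical gradient magnitudes the weights are all 1 and the result equals the ordinary MSE; samples with larger gradients (more informative under this proxy) pull the loss toward their own reconstruction error.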
Computer Science > Artificial Intelligence, arXiv:2504.05806 (cs)
[Submitted on 8 Apr 2025 (v1), last revised 23 Feb 2026 (this version, v2)]
Title: Meta-Continual Learning of Neural Fields
Authors: Seungyoon Woo, Junhyeog Yun, Gunhee Kim
Abstract: Neural Fields (NF) have gained prominence as a versatile framework for complex data representation. This work unveils a new problem setting termed Meta-Continual Learning of Neural Fields (MCL-NF) and introduces a novel strategy that employs a modular architecture combined with optimization-based meta-learning. Focused on overcoming the limitations of existing methods for continual learning of neural fields, such as catastrophic forgetting and slow convergence, our strategy achieves high-quality reconstruction with significantly improved learning speed. We further introduce a Fisher Information Maximization loss for neural radiance fields (FIM-NeRF), which maximizes information gains at the sample level to enhance learning generalization, with a proven convergence guarantee and generalization bound. We perform extensive evaluations across image, audio, and video reconstruction and view synthesis tasks on six diverse datasets, demonstrating our method's superiority in reconstruction quality and speed over existing MCL and CL-NF approaches. Notably, our approach attains rapid adaptation of neural fields...