[2602.16337] Subtractive Modulative Network with Learnable Periodic Activations
Summary
The paper presents the Subtractive Modulative Network (SMN), a new architecture for implicit neural representations that improves parameter efficiency and reconstruction accuracy on image fitting and 3D novel view synthesis tasks.
Why It Matters
This research introduces a neural network architecture that combines principles from classical signal processing with machine learning, potentially improving the efficiency and fidelity of image reconstruction and 3D novel view synthesis. Such gains could carry over to broader computer vision applications.
Key Takeaways
- The SMN architecture utilizes a learnable periodic activation layer for enhanced signal processing.
- It achieves a PSNR of over 40 dB on two image datasets, indicating high reconstruction accuracy.
- The architecture shows consistent advantages in 3D novel view synthesis tasks compared to existing methods.
- The design is inspired by classical subtractive synthesis, merging traditional techniques with modern neural networks.
- The empirical validation supports the theoretical framework proposed in the study.
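PSNR, the metric quoted in the takeaways above, is computed from the mean squared error between a reference image and its reconstruction. A minimal sketch of that computation (assuming the usual 8-bit peak value of 255; this is a generic definition, not code from the paper):

```python
import numpy as np

def psnr(reference, reconstruction, peak=255.0):
    """Peak signal-to-noise ratio in dB: higher means a closer reconstruction."""
    ref = reference.astype(np.float64)
    rec = reconstruction.astype(np.float64)
    mse = np.mean((ref - rec) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((4, 4), 100.0)
rec = ref + 2.0                      # uniform error of 2 intensity levels
print(round(psnr(ref, rec), 2))      # 42.11
```

At 40+ dB, the per-pixel error is only a couple of intensity levels out of 255, which is why the paper treats it as high-fidelity reconstruction.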
Computer Science > Computer Vision and Pattern Recognition
arXiv:2602.16337 (cs) [Submitted on 18 Feb 2026]
Title: Subtractive Modulative Network with Learnable Periodic Activations
Authors: Tiou Wang, Zhuoqian Yang, Markus Flierl, Mathieu Salzmann, Sabine Süsstrunk
Abstract: We propose the Subtractive Modulative Network (SMN), a novel, parameter-efficient Implicit Neural Representation (INR) architecture inspired by classical subtractive synthesis. The SMN is designed as a principled signal processing pipeline, featuring a learnable periodic activation layer (Oscillator) that generates a multi-frequency basis, and a series of modulative mask modules (Filters) that actively generate high-order harmonics. We provide both theoretical analysis and empirical validation for our design. Our SMN achieves a PSNR of $40+$ dB on two image datasets, comparing favorably against state-of-the-art methods in terms of both reconstruction accuracy and parameter efficiency. Furthermore, a consistent advantage is observed on the challenging 3D NeRF novel view synthesis task. Supplementary materials are available at this https URL.
Subjects: Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
Cite as: arXiv:2602.16337 [cs.CV] (or arXiv:2602.16337v1 [cs.CV] for this version)
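The abstract describes a pipeline of a learnable sinusoidal Oscillator followed by multiplicative Filter masks. The summary page carries no code, so the NumPy sketch below is only an illustration of that structure: layer sizes, the sigmoid gate, and the initialization scales are assumptions for the sketch, not the authors' implementation. The key idea it mimics is that gating a sinusoidal basis elementwise multiplies sinusoids together, producing sum- and difference-frequency (higher-order) harmonics.

```python
import numpy as np

rng = np.random.default_rng(0)

def oscillator(x, W, phi):
    """Learnable periodic activation: a bank of sinusoids whose
    frequencies (W) and phases (phi) would be trained by gradient descent."""
    return np.sin(x @ W + phi)                   # (N, hidden) multi-frequency basis

def modulative_filter(h, M, b):
    """Multiplicative mask: an elementwise sigmoid gate on the basis;
    multiplying sinusoids generates higher-order harmonics."""
    gate = 1.0 / (1.0 + np.exp(-(h @ M + b)))    # mask values in (0, 1)
    return h * gate

def smn_forward(coords, params):
    """coords: (N, 2) normalized pixel coordinates -> (N, 3) predicted RGB."""
    h = oscillator(coords, params["W0"], params["phi0"])
    for M, b in params["filters"]:
        h = modulative_filter(h, M, b)
    return h @ params["Wout"]                    # linear readout to RGB

hidden, depth = 64, 3                            # assumed sizes for the sketch
params = {
    "W0": rng.normal(scale=10.0, size=(2, hidden)),   # wide init spans many frequencies
    "phi0": rng.uniform(0.0, 2 * np.pi, size=hidden),
    "filters": [(rng.normal(scale=0.1, size=(hidden, hidden)), np.zeros(hidden))
                for _ in range(depth)],
    "Wout": rng.normal(scale=0.1, size=(hidden, 3)),
}

coords = rng.uniform(-1.0, 1.0, size=(5, 2))     # a few (x, y) query points
rgb = smn_forward(coords, params)
print(rgb.shape)                                 # (5, 3)
```

As in any implicit neural representation, training would fit this coordinate-to-color map to one signal by minimizing reconstruction error over sampled pixels.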