[2509.20986] SiNGER: A Clearer Voice Distills Vision Transformers Further
Computer Science > Computer Vision and Pattern Recognition

arXiv:2509.20986 (cs)
[Submitted on 25 Sep 2025 (v1), last revised 3 Mar 2026 (this version, v4)]

Title: SiNGER: A Clearer Voice Distills Vision Transformers Further
Authors: Geunhyeok Yu, Sunjae Jeong, Yoonyoung Choi, Jaeseung Kim, Hyoseok Hwang

Abstract: Vision Transformers are widely adopted as the backbone of vision foundation models, but they are known to produce high-norm artifacts that degrade representation quality. When knowledge distillation transfers these features to students, high-norm artifacts dominate the objective, so students overfit to artifacts and underweight informative signals, diminishing the gains from larger models. Prior work attempted to remove artifacts but encountered an inherent trade-off between artifact suppression and preserving informative signals from teachers. To address this, we introduce Singular Nullspace-Guided Energy Reallocation (SiNGER), a novel distillation framework that suppresses artifacts while preserving informative signals. The key idea is principled teacher feature refinement: during refinement, we leverage the nullspace-guided perturbation to preserve information while suppressing artifacts. Then, the refined teacher's features are distilled to a student. We implement this perturbation efficiently with a LoRA-b...
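To build intuition for the abstract's core idea — suppressing high-norm artifact tokens while leaving informative directions untouched — here is a minimal NumPy sketch. It is not the paper's SiNGER implementation (which, per the abstract, uses a LoRA-based perturbation learned during distillation); the toy data, the norm-based artifact heuristic, and the rank `k` are all assumptions made purely for illustration.

```python
import numpy as np

# Illustrative sketch only (not the paper's method): estimate an informative
# signal subspace via SVD, then suppress a high-norm artifact token by
# projecting it onto that subspace, so its excess energy in the complementary
# (null) directions is removed while other tokens are left intact.

rng = np.random.default_rng(0)

# Toy teacher features: 8 tokens x 6 dims lying in a rank-3 signal subspace.
basis = rng.standard_normal((3, 6))
feats = rng.standard_normal((8, 3)) @ basis

# Inject a high-norm artifact into token 0, mimicking ViT artifact tokens.
feats[0] += 25.0 * rng.standard_normal(6)

# Flag artifact tokens by their outlier norms (heuristic threshold).
norms = np.linalg.norm(feats, axis=1)
is_artifact = norms > 3.0 * np.median(norms)

# Estimate the informative subspace from the non-artifact tokens only.
k = 3  # assumed rank of the signal subspace in this toy
_, _, Vt = np.linalg.svd(feats[~is_artifact], full_matrices=False)
P_signal = Vt[:k].T @ Vt[:k]  # orthogonal projector onto the signal subspace

# Refinement: keep each token's signal component, dropping energy in the
# null directions where the artifact's excess norm concentrates.
refined = feats @ P_signal

norm_before = np.linalg.norm(feats[0])
norm_after = np.linalg.norm(refined[0])
```

Because the clean tokens already lie in the estimated subspace, the projection leaves them essentially unchanged, while the artifact token's norm shrinks — the trade-off the abstract describes is avoided here only because the toy's signal subspace is known exactly.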