[2512.10938] Stronger Normalization-Free Transformers
Computer Science > Machine Learning
arXiv:2512.10938 (cs)
[Submitted on 11 Dec 2025 (v1), last revised 31 Mar 2026 (this version, v2)]

Title: Stronger Normalization-Free Transformers
Authors: Mingzhi Chen, Taiming Lu, Jiachen Zhu, Mingjie Sun, Zhuang Liu

Abstract: Although normalization layers have long been viewed as indispensable components of deep learning architectures, the recent introduction of Dynamic Tanh (DyT) has demonstrated that alternatives are possible. The point-wise function DyT constrains extreme values for stable convergence and matches normalization-level performance; this work searches further for function designs that surpass it. We first study how the intrinsic properties of point-wise functions influence training and performance. Building on these findings, we conduct a large-scale search for a more effective function design. Through this exploration, we introduce $\mathrm{Derf}(x) = \mathrm{erf}(\alpha x + s)$, where $\mathrm{erf}(x)$ is the rescaled Gaussian cumulative distribution function, and identify it as the most performant design. Derf outperforms LayerNorm, RMSNorm, and DyT across a wide range of domains, including visual recognition and generation, speech representation, and DNA sequence modeling. Our analysis also suggests that the performance gains of Derf largely stem from its improved generalization ra...
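The two point-wise functions named in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: in the paper, $\alpha$ and $s$ are learnable parameters (their exact parameterization, e.g. scalar vs. per-channel, is not stated in the abstract and is an assumption here); the fixed defaults below are for demonstration only.

```python
import math

def dyt(x, alpha=1.0):
    # Dynamic Tanh (DyT): point-wise tanh(alpha * x).
    # Squashes extreme values into (-1, 1), which the abstract credits
    # for stable convergence without a normalization layer.
    return math.tanh(alpha * x)

def derf(x, alpha=1.0, s=0.0):
    # Derf(x) = erf(alpha * x + s), the design the paper identifies as
    # most performant. erf is a rescaled Gaussian CDF, so Derf is also
    # a bounded, point-wise squashing function, with an extra shift s.
    return math.erf(alpha * x + s)
```

Both functions are bounded in (-1, 1) and act element-wise, so either can replace a LayerNorm/RMSNorm sub-layer without any cross-token or cross-channel statistics.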