[2603.22154] dynActivation: A Trainable Activation Family for Adaptive Nonlinearity
Computer Science > Machine Learning
arXiv:2603.22154 (cs)
[Submitted on 23 Mar 2026]

Title: dynActivation: A Trainable Activation Family for Adaptive Nonlinearity
Authors: Alois Bachmann

Abstract: This paper proposes $\mathrm{dynActivation}$, a per-layer trainable activation defined as $f_i(x) = \mathrm{BaseAct}(x)(\alpha_i - \beta_i) + \beta_i x$, where $\alpha_i$ and $\beta_i$ are lightweight learned scalars that interpolate between the base nonlinearity and a linear path, and $\mathrm{BaseAct}(x)$ can be any ReLU-like base function. The static and dynamic ReLU-like variants are compared across multiple vision tasks, language modeling tasks, and ablation studies. The results suggest that dynActivation variants tend to linearize deep layers while maintaining high performance, improving training efficiency by up to $+54\%$ over ReLU. On CIFAR-10, dynActivation(Mish) improves over static Mish by up to $+14.02\%$ on AttentionCNN, with an average improvement of $+6.00\%$ and a $24\%$ convergence-AUC reduction relative to Mish (2120 vs. 2785). In a 1-to-75-layer MNIST depth-scaling study, dynActivation never drops below $95\%$ test accuracy ($95.3$--$99.3\%$), while ReLU collapses below $80\%$ at 25 layers. Under FGSM at $\varepsilon{=}0.08$, dynActivation(Mish) incurs a $55.39\%$ accuracy drop versus $62.79\%$ for ReLU (a $7.40\%$ advantage). ...
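
As a concrete illustration of the definition above, the following is a minimal PyTorch sketch of a per-layer trainable activation of this form. The class name, the Mish default, and the initialization values ($\alpha = 1$, $\beta = 0$, which recover the plain base activation) are assumptions made for illustration and are not taken from the paper.

```python
import torch
import torch.nn as nn


class DynActivation(nn.Module):
    """Sketch of f(x) = BaseAct(x) * (alpha - beta) + beta * x,
    with alpha and beta as learned per-layer scalars.
    Defaults (Mish base, alpha=1.0, beta=0.0) are illustrative assumptions."""

    def __init__(self, base_act: nn.Module | None = None,
                 alpha_init: float = 1.0, beta_init: float = 0.0):
        super().__init__()
        # Any ReLU-like base nonlinearity can be plugged in here.
        self.base_act = base_act if base_act is not None else nn.Mish()
        # Lightweight learned scalars: alpha scales the nonlinear path,
        # beta mixes in a linear (identity) path.
        self.alpha = nn.Parameter(torch.tensor(alpha_init))
        self.beta = nn.Parameter(torch.tensor(beta_init))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base_act(x) * (self.alpha - self.beta) + self.beta * x
```

With $\alpha_i = 1$ and $\beta_i = 0$ this reduces to the plain base activation, while $\alpha_i = 1$ and $\beta_i \to 1$ drive the layer toward the identity map, which is consistent with the abstract's observation that dynActivation variants tend to linearize deep layers.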