[2603.25009] A Systematic Empirical Study of Grokking: Depth, Architecture, Activation, and Regularization
Computer Science > Machine Learning

arXiv:2603.25009 (cs)

[Submitted on 26 Mar 2026]

Title: A Systematic Empirical Study of Grokking: Depth, Architecture, Activation, and Regularization

Authors: Shalima Binta Manir, Anamika Paul Rupa

Abstract: Grokking, the delayed transition from memorization to generalization in neural networks, remains poorly understood, in part because prior empirical studies confound the roles of architecture, optimization, and regularization. We present a controlled study that systematically disentangles these factors on modular addition (mod 97), with matched and carefully tuned training regimes across models. Our central finding is that grokking dynamics are not primarily determined by architecture, but by interactions between optimization stability and regularization. Specifically, we show: (1) \textbf{depth has a non-monotonic effect}: depth-4 MLPs consistently fail to grok while depth-8 residual networks recover generalization, demonstrating that depth requires architectural stabilization; (2) \textbf{the apparent gap between Transformers and MLPs largely disappears} (1.11$\times$ delay) under matched hyperparameters, indicating that previously reported differences stem mainly from optimizer and regularization confounds; (3) \textbf{activation fu...
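To make the experimental setting concrete, below is a minimal sketch of the standard grokking setup on modular addition mod 97. The widths, training fraction, learning rate, and weight-decay value are assumptions drawn from common practice in the grokking literature, not the paper's reported configuration.

```python
# Minimal grokking setup on (a + b) mod 97. Hypothetical illustration:
# all hyperparameters below are common literature defaults, not the
# paper's exact settings.
import torch
import torch.nn as nn

P = 97  # modulus defining the task

# Enumerate all P*P input pairs and their labels.
pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))
labels = (pairs[:, 0] + pairs[:, 1]) % P

# Random split; a small training fraction is what makes the delayed
# memorization-to-generalization transition visible.
perm = torch.randperm(P * P)
n_train = int(0.3 * P * P)
train_idx, test_idx = perm[:n_train], perm[n_train:]

class MLP(nn.Module):
    """Shallow MLP over concatenated token embeddings (assumed widths)."""
    def __init__(self, d=128, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(P, d)
        self.net = nn.Sequential(
            nn.Linear(2 * d, hidden), nn.ReLU(), nn.Linear(hidden, P)
        )

    def forward(self, x):
        return self.net(self.embed(x).flatten(1))

model = MLP()
# Weight decay is the regularizer most often implicated in grokking.
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

for step in range(10_000):
    opt.zero_grad()
    loss = loss_fn(model(pairs[train_idx]), labels[train_idx])
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        with torch.no_grad():
            preds = model(pairs[test_idx]).argmax(-1)
            acc = (preds == labels[test_idx]).float().mean()
        print(f"step {step}: train loss {loss.item():.3f}, test acc {acc:.3f}")
```

Under a setup like this, training accuracy typically saturates early while test accuracy stays near chance for many steps before jumping; the paper's contribution is varying depth, architecture, activation, and regularization around such a baseline under matched training regimes.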