[2602.14495] Divine Benevolence is an $x^2$: GLUs scale asymptotically faster than MLPs
Summary
This paper compares the scaling laws of Gated Linear Units (GLUs) and Multi-Layer Perceptrons (MLPs), demonstrating that GLUs scale asymptotically faster because their piecewise-quadratic functional forms yield a steeper loss-versus-parameter-count slope ($L(P) \propto P^{-3}$ versus $P^{-2}$ for MLPs on function reconstruction problems).
Why It Matters
Understanding the scaling behavior of different neural network architectures is crucial for advancing machine learning models. This research provides insights into the design of more efficient models, potentially impacting various applications in AI and machine learning.
Key Takeaways
- GLUs exhibit asymptotically faster scaling than MLPs ($L(P) \propto P^{-3}$ versus $P^{-2}$).
- The paper proposes the Gated Quadratic Unit, which has an even steeper $L(P)$ scaling slope than the GLU.
- Numerical analysis is applied to understand model architecture choices.
- Empirical verification supports the theoretical findings on scaling slopes.
- The research opens avenues for designing superior large models based on first principles.
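To make the functional-form distinction concrete, here is a minimal NumPy sketch (our own illustration, not the paper's code): a ReLU MLP hidden unit is piecewise linear in its input, while a ReLU-gated GLU unit multiplies two linear functions of the input and is therefore piecewise quadratic. The unit names and weights below are hypothetical 1D simplifications.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def mlp_unit(x, w1=1.0, w2=1.0):
    # Piecewise linear: a scaled ReLU of a linear function of x.
    return w2 * relu(w1 * x)

def glu_unit(x, w_val=1.0, w_gate=1.0):
    # Piecewise quadratic: the product of a linear value path and a
    # ReLU-gated path, e.g. x * relu(x) = x^2 for x > 0, else 0.
    return (w_val * x) * relu(w_gate * x)

x = np.linspace(-2.0, 2.0, 5)        # [-2, -1, 0, 1, 2]
print(mlp_unit(x))                   # grows linearly for x > 0
print(glu_unit(x))                   # grows quadratically for x > 0
```

The quadratic form is what the paper credits with the higher order of approximation, and hence the steeper $L(P)$ slope.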
Computer Science > Machine Learning
arXiv:2602.14495 (cs) [Submitted on 16 Feb 2026]
Title: Divine Benevolence is an $x^2$: GLUs scale asymptotically faster than MLPs
Authors: Alejandro Francisco Queiruga
Abstract: Scaling laws can be understood from ground-up numerical analysis, where traditional function approximation theory can explain shifts in model architecture choices. GLU variants now dominate frontier LLMs, and similar outer-product architectures are prevalent in ranking models. The success of these architectures has mostly been left as an empirical discovery. In this paper, we apply the tools of numerical analysis to expose a key factor: these models have an $x^2$ which enables \emph{asymptotically} faster scaling than MLPs. GLUs have piecewise quadratic functional forms that are sufficient to exhibit quadratic order of approximation. Our key contribution is to demonstrate that the $L(P)$ scaling slope is $L(P) \propto P^{-3}$ for GLUs but only $L(P) \propto P^{-2}$ for MLPs on function reconstruction problems. We provide a parameter construction and empirical verification of these slopes for 1D function approximation. From the first principles we discover, we make one stride and propose the ``Gated Quadratic Unit'' which has an even steeper $L(P)$ slope than the GLU and MLP. This opens the possibility of archi...
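The scaling-slope claim can be sketched numerically: given (parameter count, loss) pairs, the exponent $\alpha$ in $L(P) \propto P^{\alpha}$ is the slope of a linear fit in log-log space. The data below is synthetic and follows the paper's claimed slopes by construction; it is not the paper's experimental data.

```python
import numpy as np

def scaling_slope(params, losses):
    # Fit log(L) = alpha * log(P) + c and return the exponent alpha.
    slope, _ = np.polyfit(np.log(params), np.log(losses), 1)
    return slope

P = np.array([1e3, 1e4, 1e5])
L_mlp = P ** -2.0    # MLP slope from the paper: L(P) ∝ P^-2
L_glu = P ** -3.0    # GLU slope from the paper: L(P) ∝ P^-3

print(scaling_slope(P, L_mlp))   # ≈ -2.0
print(scaling_slope(P, L_glu))   # ≈ -3.0
```

On real loss curves the fit would be noisy, but the log-log slope is the quantity the paper's empirical verification compares against its theoretical construction.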