[2602.20370] Quantitative Approximation Rates for Group Equivariant Learning
Summary
This paper derives quantitative approximation rates for group equivariant learning, showing that equivariant architectures retain expressive power comparable to standard, unconstrained neural networks.
Why It Matters
Understanding the approximation capabilities of group equivariant models matters because these models exploit symmetries in the data, which can lead to more efficient learning across applications. This work fills a gap in the literature by providing quantitative insight into the expressivity of such models.
Key Takeaways
- The paper provides quantitative approximation rates for group equivariant learning models.
- Equivariant architectures achieve approximation rates comparable to those of unconstrained neural networks, so enforcing symmetry does not cost approximation power.
- The study covers several architectures, including Deep Sets and Transformers, highlighting their performance in learning equivariant functions.
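To make the symmetry constraint concrete, the permutation-invariant Deep Sets architecture mentioned above composes a shared per-element map with sum pooling and a readout. The sketch below is a minimal illustration of that structure only; the maps `phi` and `rho` and their weights are hypothetical stand-ins, not taken from the paper.

```python
import numpy as np

def phi(x):
    # Hypothetical per-element feature map: a single ReLU layer with
    # fixed illustrative weights (not from the paper).
    return np.maximum(0.0, np.outer(x, np.array([1.0, -1.0, 0.5])))

def rho(z):
    # Hypothetical readout applied to the pooled representation.
    return float(z @ np.array([0.3, 0.2, -0.1]))

def deep_sets(xs):
    # Deep Sets form f(X) = rho(sum_i phi(x_i)): summing over elements
    # makes f invariant to any permutation of the inputs.
    return rho(phi(np.asarray(xs, dtype=float)).sum(axis=0))

# Reordering the input set leaves the output unchanged.
xs = [0.2, -1.3, 0.7]
assert np.isclose(deep_sets(xs), deep_sets(list(reversed(xs))))
```

Because invariance comes from the sum, it holds exactly by construction, independent of how expressive `phi` and `rho` are; the paper's contribution is quantifying how well such constrained forms approximate symmetric target functions.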
Computer Science > Machine Learning
arXiv:2602.20370 (cs)
Submitted on 23 Feb 2026
Title: Quantitative Approximation Rates for Group Equivariant Learning
Authors: Jonathan W. Siegel, Snir Hordan, Hannah Lawrence, Ali Syed, Nadav Dym
Abstract: The universal approximation theorem establishes that neural networks can approximate any continuous function on a compact set. Later works in approximation theory provide quantitative approximation rates for ReLU networks on the class of $\alpha$-Hölder functions $f: [0,1]^N \to \mathbb{R}$. The goal of this paper is to provide similar quantitative approximation results in the context of group equivariant learning, where the learned $\alpha$-Hölder function is known to obey certain group symmetries. While there has been much interest in the literature in understanding the universal approximation properties of equivariant models, very few quantitative approximation results are known for equivariant models. In this paper, we bridge this gap by deriving quantitative approximation rates for several prominent group-equivariant and invariant architectures. The architectures that we consider include: the permutation-invariant Deep Sets architecture; the permutation-equivariant Sumformer and Transformer architectures; joint invariance to...
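For readers unfamiliar with the function class in the abstract, the $\alpha$-Hölder condition is the standard one below (the paper may use a particular norm or range of $\alpha$; this is the textbook form, not a statement of the paper's exact setting):

```latex
% A function f : [0,1]^N \to \mathbb{R} is \alpha-H\"older (0 < \alpha \le 1)
% if its H\"older seminorm is finite:
f \in C^{0,\alpha}\bigl([0,1]^N\bigr)
\iff
\sup_{\substack{x, y \in [0,1]^N \\ x \neq y}}
  \frac{\lvert f(x) - f(y) \rvert}{\lVert x - y \rVert^{\alpha}} < \infty .
```

Approximation rates for this class measure how the network error decays as a function of network size, which is exactly the quantity the paper extends to the equivariant setting.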