[2603.03612] Why Are Linear RNNs More Parallelizable?
Computer Science > Machine Learning
arXiv:2603.03612 (cs)
[Submitted on 4 Mar 2026]

Title: Why Are Linear RNNs More Parallelizable?
Authors: William Merrill, Hongjian Jiang, Yanhong Li, Ashish Sabharwal

Abstract: The community is increasingly exploring linear RNNs (LRNNs) as language models, motivated by their expressive power and parallelizability. While prior work establishes the expressivity benefits of LRNNs over transformers, it is unclear what makes LRNNs -- but not traditional, nonlinear RNNs -- as easy to parallelize in practice as transformers. We answer this question by providing a tight connection between types of RNNs and standard complexity classes. We show that LRNNs can be viewed as log-depth (bounded fan-in) arithmetic circuits, which represents only a slight depth overhead relative to the log-depth boolean circuits that transformers admit. Furthermore, we show that nonlinear RNNs can solve $\mathsf{L}$-complete problems (and even $\mathsf{P}$-complete ones, under polynomial precision), revealing a fundamental barrier to parallelizing them as efficiently as transformers. Our theory also identifies fine-grained expressivity differences between recent popular LRNN variants: permutation-diagonal LRNNs are $\mathsf{NC}^1$-complete whereas diagonal-plus-low-rank LRNNs are more expressive ($\maths...
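To make the parallelizability claim concrete, the following sketch shows the standard associative-scan trick that lets a diagonal LRNN recurrence $h_t = a_t \odot h_{t-1} + b_t$ run in logarithmic parallel depth. This is an illustrative example of the general technique, not the paper's specific circuit construction; the function names and the Hillis-Steele (recursive doubling) formulation are choices made here for clarity.

```python
import numpy as np

def sequential_lrnn(a, b):
    """Reference recurrence: h_t = a_t * h_{t-1} + b_t, with h_0 = 0.
    O(T) sequential steps -- this is what makes nonlinear RNNs slow."""
    h = np.zeros_like(b[0])
    out = []
    for t in range(len(a)):
        h = a[t] * h + b[t]
        out.append(h)
    return np.stack(out)

def combine(x, y):
    """Associative operator on (a, b) pairs.
    Composing the affine map h -> a1*h + b1 followed by h -> a2*h + b2
    gives h -> (a1*a2)*h + (a2*b1 + b2)."""
    a1, b1 = x
    a2, b2 = y
    return a1 * a2, a2 * b1 + b2

def parallel_scan_lrnn(a, b):
    """Inclusive scan by recursive doubling: O(log T) parallel depth,
    because `combine` is associative. Returns all h_t at once."""
    T = len(a)
    A, B = a.copy(), b.copy()
    shift = 1
    while shift < T:
        # Each position t >= shift absorbs the composed map ending at t - shift.
        A_new, B_new = combine((A[:-shift], B[:-shift]), (A[shift:], B[shift:]))
        A[shift:], B[shift:] = A_new, B_new
        shift *= 2
    # With h_0 = 0, the composed map at t evaluates to its bias term B[t].
    return B
```

Both functions compute the same sequence of hidden states; the scan version replaces the length-$T$ dependency chain with $\lceil \log_2 T \rceil$ rounds of independent elementwise work, which is exactly the log-depth arithmetic-circuit view of LRNNs described in the abstract. A nonlinear recurrence $h_t = \sigma(a_t h_{t-1} + b_t)$ admits no such associative decomposition, which is the barrier the paper formalizes.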