[2510.03638] Expressive Power of Implicit Models: Rich Equilibria and Test-Time Scaling
Computer Science > Machine Learning
arXiv:2510.03638 (cs)
[Submitted on 4 Oct 2025 (v1), last revised 1 Mar 2026 (this version, v3)]

Authors: Jialin Liu, Lisang Ding, Stanley Osher, Wotao Yin

Abstract: Implicit models, an emerging model class, compute outputs by iterating a single parameter block to a fixed point. This architecture realizes an infinite-depth, weight-tied network that trains with constant memory, substantially reducing memory requirements compared to explicit models at the same level of performance. While it is empirically known that these compact models can often match or even exceed the accuracy of larger explicit networks by allocating more test-time compute, the underlying mechanism remains poorly understood. We study this gap through a nonparametric analysis of expressive power. We provide a strict mathematical characterization, showing that a simple and regular implicit operator can, through iteration, progressively express more complex mappings. We prove that for a broad class of implicit models, this process lets the model's expressive power scale with test-time compute, ultimately matching a much richer function class. The theory is validated across four domains: image reconstruction, scientific computing, operations research, an...
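To make the core idea concrete, the forward pass of an implicit model can be sketched as a fixed-point iteration of one weight-tied layer: the same parameters are applied repeatedly until the hidden state stops changing, so "depth" is replaced by iteration count (test-time compute). The minimal NumPy sketch below is illustrative only; the update rule `z ← tanh(W z + U x + b)`, the weight scaling that makes the map contractive, and all variable names are assumptions for the example, not the paper's construction.

```python
import numpy as np

def implicit_forward(x, W, U, b, tol=1e-6, max_iter=500):
    """Fixed-point forward pass of a weight-tied implicit layer.

    Iterates z <- tanh(W @ z + U @ x + b) until successive iterates
    differ by less than `tol`. Returns the equilibrium and the number
    of iterations used (the "test-time compute" spent).
    """
    z = np.zeros(W.shape[0])
    for k in range(max_iter):
        z_next = np.tanh(W @ z + U @ x + b)
        if np.linalg.norm(z_next - z) < tol:
            return z_next, k + 1
        z = z_next
    return z, max_iter

rng = np.random.default_rng(0)
d, n = 8, 4
# Small scale on W keeps the map contractive, so the iteration
# converges to a unique fixed point regardless of initialization.
W = 0.1 * rng.standard_normal((d, d))
U = rng.standard_normal((d, n))
b = rng.standard_normal(d)
x = rng.standard_normal(n)

z_star, iters = implicit_forward(x, W, U, b)
# At convergence, z_star approximately satisfies z = tanh(W z + U x + b)
residual = np.linalg.norm(z_star - np.tanh(W @ z_star + U @ x + b))
print(f"converged in {iters} iterations, residual {residual:.2e}")
```

Note that memory stays constant in the iteration count: only the current state `z` is stored, whereas an explicit network of comparable depth would store one activation per layer for backpropagation.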