[2604.06109] Learning $\mathsf{AC}^0$ Under Graphical Models
Computer Science > Machine Learning

arXiv:2604.06109 (cs) [Submitted on 7 Apr 2026]

Title: Learning $\mathsf{AC}^0$ Under Graphical Models

Authors: Gautam Chandrasekaran, Jason Gaitonde, Ankur Moitra, Arsen Vasilyan

Abstract: In a landmark result, Linial, Mansour and Nisan (J. ACM 1993) gave a quasipolynomial-time algorithm for learning constant-depth circuits from labeled i.i.d. samples under the uniform distribution. Their work has had a deep and lasting legacy in computational learning theory, in particular introducing the $\textit{low-degree algorithm}$. However, an important critique of many results and techniques in the area is their reliance on product structure, which is unlikely to hold in realistic settings. Obtaining similar learning guarantees for more natural correlated distributions has been a longstanding challenge in the field. In this work, we give quasipolynomial-time algorithms for learning $\mathsf{AC}^0$ substantially beyond the product setting: our algorithms succeed when the inputs come from any graphical model with polynomial growth that exhibits strong spatial mixing. The main technical challenge is finding a workaround to Fourier analysis, which we do by showing how new sampling algorithms allow us to transfer statements about low-degree polynomial approximation under the uniform distribution to graphical models. Our approac...