[2603.02973] On the Topology of Neural Network Superlevel Sets
Computer Science > Machine Learning
arXiv:2603.02973 (cs)
[Submitted on 3 Mar 2026]

Title: On the Topology of Neural Network Superlevel Sets
Authors: Bahman Gharesifard

Abstract: We show that neural networks with activations satisfying a Riccati-type ordinary differential equation condition (an assumption arising in recent universal approximation results in the uniform topology) produce Pfaffian outputs on analytic domains, with format controlled only by the architecture. Consequently, superlevel sets, as well as Lie-bracket rank-drop loci for neural-network-parameterized vector fields, admit architecture-only bounds on topological complexity, in particular on total Betti numbers, uniformly over all weights.

Subjects: Machine Learning (cs.LG); Optimization and Control (math.OC)
Cite as: arXiv:2603.02973 [cs.LG] (or arXiv:2603.02973v1 [cs.LG] for this version)
DOI: https://doi.org/10.48550/arXiv.2603.02973 (arXiv-issued DOI via DataCite, pending registration)

Submission history:
From: Bahman Gharesifard
[v1] Tue, 3 Mar 2026 13:30:06 UTC (16 KB)
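As a hedged illustration of the abstract's key hypothesis (the paper's precise condition may differ), a Riccati-type ODE constrains the activation function to satisfy a first-order equation that is quadratic in the function itself; several standard activations do satisfy such an equation with constant coefficients:

```latex
% Illustration only, not the paper's exact assumption: a Riccati-type ODE
% for an activation \sigma has the general form
\sigma'(x) = a(x) + b(x)\,\sigma(x) + c(x)\,\sigma(x)^2 .
% Standard activations satisfy constant-coefficient instances:
% tanh:     \frac{d}{dx}\tanh(x) = 1 - \tanh(x)^2
% sigmoid:  \frac{d}{dx}\,\mathrm{sigmoid}(x)
%            = \mathrm{sigmoid}(x)\bigl(1 - \mathrm{sigmoid}(x)\bigr)
```

Conditions of this kind are what make the network output Pfaffian: each activation's derivative is a polynomial in the activation itself, which is the defining closure property of a Pfaffian chain.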