[2603.28113] Lipschitz verification of neural networks through training
Computer Science > Machine Learning
arXiv:2603.28113 (cs) [Submitted on 30 Mar 2026]

Title: Lipschitz verification of neural networks through training
Authors: Simon Kuang, Yuezhu Xu, S. Sivaranjani, Xinfan Lin

Abstract: The global Lipschitz constant of a neural network governs both adversarial robustness and generalization. Conventional approaches to "certified training" typically follow a train-then-verify paradigm: they train a network and then attempt to bound its Lipschitz constant. Because the efficient "trivial bound" (the product of the layerwise Lipschitz constants) is exponentially loose for arbitrary networks, these approaches must rely on computationally expensive techniques such as semidefinite programming, mixed-integer programming, or branch-and-bound. We propose a different paradigm: rather than designing complex verifiers for arbitrary networks, we design networks to be verifiable by the fast trivial bound. We show that directly penalizing the trivial bound during training forces it to become tight, thereby effectively regularizing the true Lipschitz constant. To achieve this, we identify three structural obstructions to a tight trivial bound (dead neurons, bias terms, and ill-conditioned weights) and introduce architectural mitigations, including a novel notion of norm-saturating polyactivations and bias...
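To make the "trivial bound" concrete: for a feedforward network whose activations are 1-Lipschitz (e.g. ReLU), the global Lipschitz constant in the 2-norm is upper-bounded by the product of the spectral norms of the weight matrices. The sketch below, a hypothetical illustration not taken from the paper, computes this bound with NumPy:

```python
import numpy as np

def trivial_lipschitz_bound(weights):
    """Upper-bound the global Lipschitz constant (2-norm) of an MLP
    with 1-Lipschitz activations by the product of the layerwise
    spectral norms ||W||_2. This is the fast but potentially loose
    "trivial bound" the abstract refers to."""
    return float(np.prod([np.linalg.norm(W, ord=2) for W in weights]))

# Illustrative two-layer network with hand-picked diagonal weights:
# spectral norms are 2 and 3, so the trivial bound is their product, 6.
W1 = np.array([[2.0, 0.0],
               [0.0, 1.0]])
W2 = np.array([[3.0, 0.0],
               [0.0, 1.0]])
bound = trivial_lipschitz_bound([W1, W2])  # -> 6.0
```

For diagonal (hence well-conditioned, aligned) weights the bound is exact; for arbitrary networks it can be exponentially loose in depth, which is the gap the paper's training-time penalty is designed to close.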