[2603.13334] Lipschitz-Based Robustness Certification Under Floating-Point Execution
Computer Science > Machine Learning
arXiv:2603.13334 (cs)
[Submitted on 6 Mar 2026 (v1), last revised 24 Mar 2026 (this version, v3)]

Title: Lipschitz-Based Robustness Certification Under Floating-Point Execution
Authors: Toby Murray

Abstract: Sensitivity-based robustness certification has emerged as a practical approach for certifying neural network robustness, including in settings that require verifiable guarantees. A key advantage of these methods is that certification is performed by concrete numerical computation (rather than symbolic reasoning) and scales efficiently with network size. However, as with the vast majority of prior work on robustness certification and verification, the soundness of these methods is typically proved with respect to a semantic model that assumes exact real arithmetic. In reality, deployed neural network implementations execute using floating-point arithmetic. This mismatch creates a semantic gap between certified robustness properties and the behaviour of the executed system. As motivating evidence, we exhibit concrete counterexamples showing that real arithmetic robustness guarantees can fail under floating-point execution, even for previously verified certifiers. Discrepancies become pronounced at lower-precision formats such as float16, and under adversarially constructed models reach semantic...
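The semantic gap described above can be illustrated with a minimal, self-contained sketch (not the paper's construction; the numbers below are hypothetical): in float16, the spacing between representable values near 2048 is 2, so a sum that is exact in real arithmetic silently loses increments, and the accumulated rounding error can exceed a real-arithmetic robustness margin.

```python
import numpy as np

# Hypothetical illustration: float16 accumulation error can exceed
# a robustness margin proved under exact real arithmetic.
terms = [2048.0, 1.0, 1.0]

# Exact real-arithmetic value (float64 is exact for these small integers).
real_sum = sum(terms)                      # 2050.0

# The same computation executed in float16: 2048 + 1 = 2049 lies exactly
# halfway between the representable values 2048 and 2050, and ties round
# to even, so both unit increments are lost.
fp16_sum = np.float16(0.0)
for t in terms:
    fp16_sum = fp16_sum + np.float16(t)    # stays at 2048.0

error = real_sum - float(fp16_sum)         # rounding error of 2.0
```

If a Lipschitz-based certificate guarantees that admissible input perturbations change this output by at most 1, the float16 execution error of 2 is already larger than that margin, so the sign of a decision quantity can differ between the certified (real-arithmetic) model and the executed one.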