[2510.16028] TAO: Tolerance-Aware Optimistic Verification for Floating-Point Neural Networks
Computer Science > Cryptography and Security

arXiv:2510.16028 (cs)

[Submitted on 15 Oct 2025 (v1), last revised 2 Mar 2026 (this version, v3)]

Title: TAO: Tolerance-Aware Optimistic Verification for Floating-Point Neural Networks

Authors: Jianzhu Yao, Hongxu Su, Taobo Liao, Zerui Cheng, Huan Zhang, Xuechao Wang, Pramod Viswanath

Abstract: Neural networks increasingly run on hardware outside the user's control (cloud GPUs, inference marketplaces). Yet ML-as-a-Service reveals little about what actually ran or whether returned outputs faithfully reflect the intended inputs. Users lack recourse against service downgrades (model swaps, quantization, graph rewrites, or discrepancies like altered ad embeddings). Verifying outputs is hard because floating-point (FP) execution on heterogeneous accelerators is inherently nondeterministic. Existing approaches are either impractical for real FP neural networks or reintroduce vendor trust. We present TAO: a Tolerance-Aware Optimistic verification protocol that accepts outputs within principled operator-level acceptance regions rather than requiring bitwise equality. TAO combines two error models: (i) sound per-operator IEEE-754 worst-case bounds and (ii) tight empirical percentile profiles calibrated across hardware. Discrepancies trigger a Merkle-anchored, threshold...
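The core idea, accepting an output if its per-operator discrepancy falls inside an acceptance region defined by a sound worst-case bound and a tighter empirical bound, can be sketched as follows. This is a minimal illustration, not the paper's protocol: the function name `accept_output` and the specific bound values are hypothetical, and a real verifier would derive the bounds per operator from IEEE-754 error analysis and cross-hardware calibration as the abstract describes.

```python
import numpy as np

def accept_output(claimed, reference, sound_bound, empirical_bound):
    """Check a claimed tensor against a locally recomputed reference.

    Returns two flags (hypothetical decision structure):
      - within_empirical: discrepancy fits the tight calibrated profile
      - within_sound: discrepancy fits the sound IEEE-754 worst-case bound
    An output inside the empirical region would be accepted outright; one
    outside the sound region would be rejected; the gap between the two
    is where a dispute/escalation step could apply.
    """
    err = np.abs(claimed - reference)
    within_empirical = bool(np.all(err <= empirical_bound))
    within_sound = bool(np.all(err <= sound_bound))
    return within_empirical, within_sound

# Toy example: simulate benign cross-hardware nondeterminism as a tiny
# perturbation of the reference output (values are illustrative only).
rng = np.random.default_rng(0)
ref = rng.standard_normal(8).astype(np.float32)
claimed = (ref + rng.uniform(-1e-6, 1e-6, 8)).astype(np.float32)

emp_ok, sound_ok = accept_output(claimed, ref,
                                 sound_bound=1e-4,
                                 empirical_bound=1e-5)
print(emp_ok, sound_ok)
```

Bitwise equality would reject almost every honest cross-hardware execution; the two-tier tolerance check above accepts them while still bounding how far a dishonest output can drift.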