[2510.26840] SpotIt: Evaluating Text-to-SQL Evaluation with Formal Verification
Computer Science > Databases
arXiv:2510.26840 (cs)
[Submitted on 30 Oct 2025 (v1), last revised 4 Mar 2026 (this version, v2)]

Title: SpotIt: Evaluating Text-to-SQL Evaluation with Formal Verification
Authors: Rocky Klopfenstein, Yang He, Andrew Tremante, Yuepeng Wang, Nina Narodytska, Haoze Wu

Abstract: Community-driven Text-to-SQL evaluation platforms play a pivotal role in tracking the state of the art of Text-to-SQL performance. The reliability of the evaluation process is critical for driving progress in the field. Current evaluation methods are largely test-based: they compare the execution results of a generated SQL query and a human-labeled ground-truth query on a static test database. Such an evaluation is optimistic, as two queries can coincidentally produce the same output on the test database while actually being different. In this work, we propose a new alternative evaluation pipeline, called SpotIt, in which a formal bounded equivalence verification engine actively searches for a database that differentiates the generated and ground-truth SQL queries. We develop techniques to extend existing verifiers to support a richer SQL subset relevant to Text-to-SQL. A performance evaluation of ten Text-to-SQL methods on the high-profile BIRD dataset suggests that test-based methods can often overlook d...
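The pitfall the abstract describes, and the bounded search SpotIt performs, can be illustrated with a minimal sketch. This is not the authors' verifier (which is a formal SMT-based engine); it is a hypothetical brute-force stand-in that enumerates small databases up to a bound and looks for one on which two queries disagree. The table schema, queries, and bounds are invented for illustration.

```python
import sqlite3
from itertools import product

# Two queries that differ only at the boundary salary == 1000.
Q_GEN = "SELECT name FROM emp WHERE salary > 1000"   # "generated" query
Q_GT  = "SELECT name FROM emp WHERE salary >= 1000"  # "ground-truth" query

def run(query, rows):
    """Execute a query against a fresh in-memory database holding `rows`."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE emp (name TEXT, salary INTEGER)")
    con.executemany("INSERT INTO emp VALUES (?, ?)", rows)
    out = sorted(con.execute(query).fetchall())
    con.close()
    return out

# Test-based evaluation: on this one static database the queries agree,
# so a test-based pipeline would (wrongly) accept the generated query.
static_test_db = [("alice", 900), ("bob", 1500)]
assert run(Q_GEN, static_test_db) == run(Q_GT, static_test_db)

def find_counterexample(max_rows=2, salaries=(999, 1000, 1001)):
    """Bounded equivalence check by enumeration: search all databases
    with at most `max_rows` rows drawn from a small value domain for
    one that distinguishes the two queries. Returns the rows, or None
    if the queries agree on every database within the bound."""
    names = ("a", "b")
    for n in range(1, max_rows + 1):
        for rows in product(product(names, salaries), repeat=n):
            if run(Q_GEN, list(rows)) != run(Q_GT, list(rows)):
                return list(rows)
    return None

cex = find_counterexample()
print(cex)  # a database containing a salary of exactly 1000
```

A real verifier replaces the enumeration with a symbolic search (e.g., encoding bounded databases as SMT constraints), but the contract is the same: either a differentiating database is found, refuting equivalence, or the queries agree on all databases within the bound.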