[2510.20351] Evaluating Latent Knowledge of Public Tabular Datasets in Large Language Models
Computer Science > Computation and Language
arXiv:2510.20351 (cs)
[Submitted on 23 Oct 2025 (v1), last revised 30 Mar 2026 (this version, v2)]

Title: Evaluating Latent Knowledge of Public Tabular Datasets in Large Language Models
Authors: Matteo Silvestri, Fabiano Veglianti, Flavio Giorgi, Fabrizio Silvestri, Gabriele Tolomei

Abstract: Large language models (LLMs) are increasingly exposed to data contamination, i.e., performance gains driven by prior exposure to test datasets rather than genuine generalization. In the context of tabular data, however, this problem remains largely unexplored. Existing approaches rely primarily on memorization tests, which are too coarse to detect contamination. In contrast, we propose a framework for assessing contamination in tabular datasets by generating controlled queries and performing comparative evaluation. Given a dataset, we craft multiple-choice aligned queries that preserve task structure while allowing systematic transformations of the underlying data. These transformations are designed to selectively disrupt dataset information while preserving partial knowledge, enabling us to isolate performance attributable to contamination. We complement this setup with non-neural baselines that provide reference performance, and we introduce a statistical testing procedure to f...
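As an illustrative sketch of the kind of pipeline the abstract describes, the snippet below builds a multiple-choice query from one tabular row and applies a simple transformation that disrupts row-level associations while preserving the column's marginal distribution. All function names, the example rows, and the specific transformation (permuting one feature column) are hypothetical assumptions for illustration, not the paper's actual implementation.

```python
import random


def make_mc_query(row, target_col, distractors, rng):
    """Build a multiple-choice query for one row: serialize the
    remaining features as the question, and offer the true target
    value alongside distractor options in shuffled order."""
    question = ", ".join(f"{k}={v}" for k, v in row.items() if k != target_col)
    options = [row[target_col]] + list(distractors)
    rng.shuffle(options)
    return {
        "question": f"Given {question}, what is {target_col}?",
        "options": options,
        "answer": options.index(row[target_col]),
    }


def shuffle_feature(rows, col, rng):
    """Hypothetical transformation: permute one feature column across
    rows, breaking each row's association with that feature while
    keeping the column's set of values (and hence its marginals) intact."""
    values = [r[col] for r in rows]
    rng.shuffle(values)
    return [{**r, col: v} for r, v in zip(rows, values)]


# Toy rows loosely modeled on a public tabular classification task.
rng = random.Random(0)
rows = [
    {"age": 39, "education": "Bachelors", "income": ">50K"},
    {"age": 23, "education": "HS-grad", "income": "<=50K"},
    {"age": 51, "education": "Masters", "income": ">50K"},
]
query = make_mc_query(rows[0], "income", ["<=50K"], rng)
perturbed = shuffle_feature(rows, "age", rng)
```

Comparing model accuracy on the original queries against accuracy on queries built from the transformed rows is one way to separate genuine inference from memorized dataset content, in the spirit of the comparative evaluation the abstract proposes.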