[2603.22332] Large Language Models for Missing Data Imputation: Understanding Behavior, Hallucination Effects, and Control Mechanisms
Computer Science > Machine Learning
arXiv:2603.22332 (cs)
[Submitted on 20 Mar 2026]

Title: Large Language Models for Missing Data Imputation: Understanding Behavior, Hallucination Effects, and Control Mechanisms
Authors: Arthur Dantas Mangussi, Ricardo Cardoso Pereira, Ana Carolina Lorena, Pedro Henriques Abreu

Abstract: Data imputation is a cornerstone technique for handling the missing values that plague real-world datasets. Despite recent progress, prior studies of Large Language Model-based imputation remain limited by scalability challenges, restricted cross-model comparisons, and evaluations conducted on small or domain-specific datasets. Furthermore, heterogeneous experimental protocols and inconsistent treatment of missingness mechanisms (MCAR, MAR, and MNAR) hinder systematic benchmarking across methods. This work investigates the robustness of Large Language Models for missing data imputation in tabular datasets using a zero-shot prompt-engineering approach. To this end, we present a comprehensive benchmarking study comparing five widely used LLMs against six state-of-the-art imputation baselines. The experimental design evaluates these methods across 29 da...
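The abstract mentions injecting missingness under the MCAR mechanism and querying LLMs with zero-shot prompts. As a minimal sketch of those two ideas (the function names, placeholder token, and prompt wording are illustrative assumptions, not the authors' actual protocol), MCAR masking and a zero-shot imputation prompt for one tabular record might look like:

```python
import numpy as np

def inject_mcar(X, miss_rate, rng):
    """Mask entries Missing Completely At Random (MCAR): every cell has
    the same probability of being masked, independent of the data values."""
    X = X.astype(float).copy()
    mask = rng.random(X.shape) < miss_rate
    X[mask] = np.nan
    return X

def build_zero_shot_prompt(header, row):
    """Serialize one incomplete row as a zero-shot imputation prompt.
    The missing cell is marked with the placeholder '<MISSING>'."""
    cells = ["<MISSING>" if (isinstance(v, float) and np.isnan(v)) else str(v)
             for v in row]
    record = ", ".join(f"{h}={c}" for h, c in zip(header, cells))
    return ("Impute the missing value in the following record. "
            "Reply with the value only.\n" + record)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X_miss = inject_mcar(X, miss_rate=0.2, rng=rng)
print(round(float(np.isnan(X_miss).mean()), 2))  # observed missing rate, close to 0.2

prompt = build_zero_shot_prompt(["age", "income"], [42.0, float("nan")])
print(prompt)
```

MAR and MNAR variants would instead condition the mask on observed or on the masked values themselves; the zero-shot setting means the prompt carries no solved examples, only the incomplete record and an instruction.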