[2603.19264] Generative Active Testing: Efficient LLM Evaluation via Proxy Task Adaptation
Computer Science > Computation and Language
arXiv:2603.19264 (cs)
[Submitted on 26 Feb 2026]

Title: Generative Active Testing: Efficient LLM Evaluation via Proxy Task Adaptation
Authors: Aashish Anantha Ramakrishnan, Ardavan Saeedi, Hamid Reza Hassanzadeh, Fazlolah Mohaghegh, Dongwon Lee

Abstract: With the widespread adoption of pre-trained Large Language Models (LLMs), there is high demand for task-specific test sets to benchmark their performance in domains such as healthcare and biomedicine. However, the cost of labeling test samples while developing new benchmarks poses a significant challenge, especially when expert annotators are required. Existing frameworks for active sample selection offer limited support for generative Question Answering tasks, where option dynamics can affect model decision boundaries. In this paper, we present Generative Active Testing (GAT), an uncertainty-aware acquisition framework that leverages LLMs as surrogates to inform the sample selection process. Using a novel Statement Adaptation Module, we convert generative tasks into a pseudo-classification format, enabling the capture of sample-level uncertainties across unlabeled candidates. Our zero-shot acquisition functions reduce estimation error by ~40% compared to traditional sampling baselines, offe...
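The acquisition loop the abstract describes can be sketched as follows. This is a hypothetical illustration, not the paper's actual GAT implementation: it assumes a surrogate model has already scored each unlabeled sample's statement-adapted answer options as a probability vector, and it selects the samples with the highest predictive entropy for labeling. The function names (`entropy`, `select_most_uncertain`) are ours, not the authors'.

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of one option-probability vector."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_most_uncertain(candidate_probs, k):
    """Rank unlabeled candidates by surrogate uncertainty.

    candidate_probs: one per-option probability vector per unlabeled
    sample, e.g. produced by a surrogate LLM scoring the
    pseudo-classification (statement-adapted) form of each question.
    Returns the indices of the k most uncertain candidates.
    """
    ranked = sorted(range(len(candidate_probs)),
                    key=lambda i: entropy(candidate_probs[i]),
                    reverse=True)
    return ranked[:k]

# Toy example: three candidates with surrogate option probabilities.
candidates = [
    [0.95, 0.03, 0.02],   # confident prediction -> low entropy
    [0.40, 0.35, 0.25],   # near-uniform -> high entropy
    [0.70, 0.20, 0.10],
]
print(select_most_uncertain(candidates, 2))  # -> [1, 2]
```

Entropy is only one plausible acquisition score; margin- or variance-based criteria would slot into the same selection loop by replacing the sort key.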