[2512.12072] VOYAGER: A Training Free Approach for Generating Diverse Datasets using LLMs
Computer Science > Computation and Language

arXiv:2512.12072 (cs)

[Submitted on 12 Dec 2025 (v1), last revised 27 Apr 2026 (this version, v2)]

Title: VOYAGER: A Training Free Approach for Generating Diverse Datasets using LLMs

Authors: Avinash Amballa, Yashas Malur Saidutta, Chi-Heng Lin, Vivek Kulkarni, Srinivas Chappidi

Abstract: Large language models (LLMs) are increasingly used to generate synthetic datasets for evaluating and training downstream models. However, prior work has noted that such generated data lacks diversity. In this paper, we propose Voyager, a novel, principled approach for generating diverse datasets. Our approach is iterative and directly optimizes a mathematical measure of dataset diversity using the machinery of determinantal point processes. Furthermore, our approach is training-free, applicable to closed-source models, and scalable. In addition to providing theoretical justification for our method, we demonstrate through comprehensive experiments that Voyager significantly outperforms popular baseline approaches, yielding a 1.5-3x improvement in diversity.

Subjects: Computation and Language (cs.CL); Machine Learning (cs.LG)

Cite as: arXiv:2512.12072 [cs.CL] (or arXiv:2512.12072v2 [cs.CL] for this version)
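The abstract does not spell out Voyager's exact procedure, but it names determinantal point processes (DPPs) as the machinery behind its diversity objective. As a rough, illustrative sketch (not the paper's algorithm), a standard greedy MAP heuristic for DPPs selects items that maximize the log-determinant of a similarity kernel submatrix, which grows when the chosen items are mutually dissimilar:

```python
import numpy as np

def greedy_dpp_select(embeddings, k):
    """Greedily pick k items approximately maximizing det(L[S, S]),
    where L is a cosine-similarity kernel over item embeddings.
    This is a generic DPP MAP heuristic, used here only to illustrate
    how a determinant can serve as a set-diversity score."""
    # Normalize rows so L[i, j] is cosine similarity; values near 1
    # indicate redundant (near-duplicate) items.
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    L = X @ X.T
    selected = []
    for _ in range(k):
        best_i, best_logdet = None, -np.inf
        for i in range(len(L)):
            if i in selected:
                continue
            idx = selected + [i]
            sub = L[np.ix_(idx, idx)]
            # Small ridge keeps slogdet stable for near-singular submatrices.
            sign, logdet = np.linalg.slogdet(sub + 1e-9 * np.eye(len(idx)))
            if sign > 0 and logdet > best_logdet:
                best_i, best_logdet = i, logdet
        selected.append(best_i)
    return selected

rng = np.random.default_rng(0)
emb = rng.normal(size=(50, 16))  # toy stand-in for dataset embeddings
picked = greedy_dpp_select(emb, 5)
print(picked)  # five mutually dissimilar items
```

In an iterative, training-free generation loop like the one the abstract describes, a score of this kind could be computed over embeddings of candidate LLM outputs to steer each round toward under-covered regions; the embedding model, candidate pool, and loop structure here are assumptions, not details from the paper.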