[2602.18710] Many AI Analysts, One Dataset: Navigating the Agentic Data Science Multiverse
Summary
The paper shows that fully autonomous AI analysts built on large language models can reproduce the analytic diversity seen in many-analyst studies: replicate runs on the same dataset and hypothesis yield widely varying results depending on the underlying model and prompt framing.
Why It Matters
Traditional many-analyst studies require months of coordination among dozens of research groups, so they are rarely run; this work shows that the same analytic variability can be surfaced cheaply and at scale with AI analysts. Measuring that variability matters for judging the reliability of AI-driven insights in scientific research and decision-making, since different analytic approaches can lead to conflicting conclusions even on identical data.
Key Takeaways
- AI analysts can produce diverse analytic outcomes on the same dataset.
- Variability in results is influenced by model choice and prompt framing.
- The study emphasizes the importance of methodological transparency in AI-driven analyses.
Title: Many AI Analysts, One Dataset: Navigating the Agentic Data Science Multiverse
Authors: Martin Bertran, Riccardo Fogliato, Zhiwei Steven Wu
Submitted on 21 Feb 2026 (arXiv:2602.18710, cs.AI)

Abstract
The conclusions of empirical research depend not only on data but on a sequence of analytic decisions that published results seldom make explicit. Past "many-analyst" studies have demonstrated this: independent teams testing the same hypothesis on the same dataset regularly reach conflicting conclusions. But such studies require months of coordination among dozens of research groups and are therefore rarely conducted. In this work, we show that fully autonomous AI analysts built on large language models (LLMs) can reproduce a similar structured analytic diversity cheaply and at scale. We task these AI analysts with testing a pre-specified hypothesis on a fixed dataset, varying the underlying model and prompt framing across replicate runs. Each AI analyst independently constructs and executes a full analysis pipeline; an AI auditor then screens each run for methodological validity. Across three datasets spanning experimental and observational designs, AI analyst-produced analyses display wide dispersion in effect sizes, $p$-values, and binary decisions on ...
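
To make the design concrete, here is a minimal sketch of the replicate-run-and-audit loop the abstract describes. This is not the authors' implementation: `run_analyst` and `audit_run` are hypothetical placeholders (in the paper, an LLM agent builds and executes a full analysis pipeline, and a separate LLM auditor screens each run), and the model names, prompt labels, and simulated outputs are invented for illustration.

```python
import itertools
import random
import statistics

MODELS = ["model-a", "model-b"]        # hypothetical model identifiers
PROMPTS = ["neutral", "exploratory"]   # hypothetical prompt framings
N_REPLICATES = 5                       # replicate runs per configuration
ALPHA = 0.05                           # decision threshold on p-values

def run_analyst(model: str, prompt: str, seed: int) -> dict:
    """Stand-in for one autonomous AI-analyst run. In the paper, the
    analyst constructs and executes a full analysis pipeline on the
    fixed dataset; here we only simulate its (effect size, p-value)
    output so the orchestration loop is runnable."""
    rng = random.Random(seed)
    return {
        "model": model,
        "prompt": prompt,
        "effect_size": rng.gauss(0.2, 0.15),
        "p_value": rng.random(),
    }

def audit_run(result: dict) -> bool:
    """Stand-in for the AI auditor that screens each run for
    methodological validity; here, a trivial sanity check."""
    return abs(result["effect_size"]) < 2.0

# Vary the underlying model and prompt framing across replicate runs.
configs = itertools.product(MODELS, PROMPTS, range(N_REPLICATES))
runs = [run_analyst(m, p, seed) for seed, (m, p, _) in enumerate(configs)]

# Keep only the runs that pass the audit step.
valid = [r for r in runs if audit_run(r)]

# Summarize dispersion in effect sizes, p-values, and binary decisions.
effects = [r["effect_size"] for r in valid]
rejections = [r["p_value"] < ALPHA for r in valid]
print(f"{len(valid)}/{len(runs)} runs passed audit")
print(f"effect size: mean={statistics.mean(effects):.3f}, "
      f"sd={statistics.stdev(effects):.3f}")
print(f"reject-null rate: {sum(rejections) / len(rejections):.2f}")
```

Even in this toy version, the summary statistics make the paper's object of study visible: the spread of effect sizes and the rate of reject-the-null decisions across runs, rather than any single run's answer.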