[2603.22510] Do Large Language Models Reduce Research Novelty? Evidence from Information Systems Journals
Computer Science > Digital Libraries
arXiv:2603.22510 (cs)
[Submitted on 23 Mar 2026]

Title: Do Large Language Models Reduce Research Novelty? Evidence from Information Systems Journals
Authors: Ali Safari

Abstract: Large language models such as ChatGPT have increased scholarly output, but whether this productivity boost produces genuine intellectual advancement remains untested. I address this gap by measuring the semantic novelty of 13,847 articles published between 2020 and 2025 in 44 Information Systems journals. Using SPECTER2 embeddings, I operationalize novelty as the cosine distance between each paper and its nearest prior neighbors. A difference-in-differences design with the November 2022 release of ChatGPT as the treatment break reveals a heterogeneous pattern: authors affiliated with institutions in non-English-dominant countries show a 0.18 standard deviation decline in relative novelty compared to authors in English-dominant countries (beta = -0.176, p < 0.001), equivalent to a 7-percentile-point drop in the novelty distribution. This finding is robust across alternative novelty specifications, treatment break dates, and sub-samples, and survives a placebo test at a pre-treatment break. I interpret these results through the lens of construal level theory, proposing that LLMs function as proximity tools that shift res...
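The novelty measure described in the abstract (cosine distance between a paper's embedding and its nearest prior neighbors) can be sketched as follows. This is a minimal illustration, not the paper's code: the neighbor count `k` and the use of a mean over the `k` nearest priors are assumptions, and obtaining SPECTER2 embeddings for each paper is out of scope here.

```python
import numpy as np

def novelty_score(paper_vec, prior_matrix, k=5):
    """Novelty as the mean cosine distance to the k most similar prior papers.

    paper_vec:    (d,) embedding of the focal paper (e.g. from SPECTER2).
    prior_matrix: (n, d) embeddings of papers published before it.
    k:            assumed neighbor count; the paper does not specify one here.
    """
    # Normalize rows so that dot products equal cosine similarities.
    p = paper_vec / np.linalg.norm(paper_vec)
    M = prior_matrix / np.linalg.norm(prior_matrix, axis=1, keepdims=True)
    sims = M @ p                     # cosine similarity to every prior paper
    nearest = np.sort(sims)[-k:]     # the k most similar (nearest) priors
    # Cosine distance = 1 - cosine similarity; high score = far from priors.
    return float(np.mean(1.0 - nearest))
```

A paper identical to some prior paper scores 0 with `k=1`; papers far from everything published earlier score closer to 1 (or up to 2 for anti-aligned embeddings).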