[2602.22359] Scaling In, Not Up? Testing Thick Citation Context Analysis with GPT-5 and Fragile Prompts
Summary
This paper tests whether GPT-5 can support interpretative citation context analysis (CCA) through thick, text-grounded readings of a single hard case, with particular attention to how prompt sensitivity shapes the model's interpretative choices.
Why It Matters
Understanding how large language models like GPT-5 can aid in citation context analysis is crucial for researchers. This study highlights the potential and limitations of using AI for nuanced interpretative tasks, which can influence academic writing and research methodologies.
Key Takeaways
- GPT-5 can support interpretative citation context analysis by scaling in to thick, text-grounded readings rather than scaling up typological labels.
- Prompt sensitivity significantly affects the model's output and interpretative choices.
- The study identifies 21 recurring interpretative moves in citation analysis.
- Scaffolding and framing of prompts can shift the focus of the model's analysis.
- The findings suggest both opportunities and risks in using LLMs for academic analysis.
Computer Science > Computation and Language, arXiv:2602.22359 (cs). Submitted on 25 Feb 2026. Author: Arno Simons.
Abstract
This paper tests whether large language models (LLMs) can support interpretative citation context analysis (CCA) by scaling in thick, text-grounded readings of a single hard case rather than scaling up typological labels. It foregrounds prompt-sensitivity analysis as a methodological issue by varying prompt scaffolding and framing in a balanced 2x3 design. Using footnote 6 in Chubin and Moitra (1975) and Gilbert's (1977) reconstruction as a probe, I implement a two-stage GPT-5 pipeline: a citation-text-only surface classification and expectation pass, followed by cross-document interpretative reconstruction using the citing and cited full texts. Across 90 reconstructions, the model produces 450 distinct hypotheses. Close reading and inductive coding identify 21 recurring interpretative moves, and linear probability models estimate how prompt choices shift their frequencies and lexical repertoire. GPT-5's surface pass is highly stable, consistently classifying the citation as "supplementary". In reconstruction, the model generates a structured space of plausible alternatives, but scaffolding and examples redistribu...
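The abstract mentions linear probability models that estimate how prompt choices shift the frequency of interpretative moves across the balanced 2x3 (scaffolding x framing) design. A minimal sketch of that estimation strategy, fitting OLS on a 0/1 outcome with numpy; the cell counts, probabilities, and dummy coding here are illustrative assumptions, not the paper's actual data:

```python
import numpy as np

def lpm_fit(X, y):
    """Linear probability model: plain OLS on a binary outcome."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# Hypothetical balanced 2x3 design: scaffold (0/1) x framing (0/1/2),
# 10 reconstructions per cell; y = 1 if a given interpretative move appears.
# Cell probabilities below are made-up and exactly additive for illustration.
rows, ys = [], []
for s in (0, 1):
    for f in (0, 1, 2):
        p = 0.2 + 0.3 * s + 0.1 * f          # assumed move frequency in this cell
        ones = int(round(p * 10))
        for i in range(10):
            # regressors: intercept, scaffold dummy, framing dummies
            rows.append([1.0, float(s), float(f == 1), float(f == 2)])
            ys.append(1.0 if i < ones else 0.0)

coef = lpm_fit(np.array(rows), np.array(ys))
# coef = [baseline rate, scaffold effect, framing-1 effect, framing-2 effect]
```

With a balanced design and additive cell means, the coefficients read directly as percentage-point shifts in a move's frequency attributable to each prompt manipulation, which is presumably what makes LPMs attractive for this kind of analysis.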