[2602.22359] Scaling In, Not Up? Testing Thick Citation Context Analysis with GPT-5 and Fragile Prompts

arXiv - AI · 4 min read

Summary

This paper tests whether GPT-5 can support interpretative citation context analysis (CCA) through thick, text-grounded readings of a single hard case, with particular attention to how prompt sensitivity shapes the model's interpretative choices.

Why It Matters

Understanding how large language models like GPT-5 can aid in citation context analysis is crucial for researchers. This study highlights the potential and limitations of using AI for nuanced interpretative tasks, which can influence academic writing and research methodologies.

Key Takeaways

  • GPT-5 can support interpretative citation context analysis, generating a structured space of plausible alternative readings.
  • Prompt sensitivity significantly affects the model's output and interpretative choices.
  • The study identifies 21 recurring interpretative moves in citation analysis.
  • Scaffolding and framing of prompts can shift the focus of the model's analysis.
  • The findings suggest both opportunities and risks in using LLMs for academic analysis.

Computer Science > Computation and Language

arXiv:2602.22359 (cs) · Submitted on 25 Feb 2026

Title: Scaling In, Not Up? Testing Thick Citation Context Analysis with GPT-5 and Fragile Prompts
Authors: Arno Simons

Abstract: This paper tests whether large language models (LLMs) can support interpretative citation context analysis (CCA) by scaling in thick, text-grounded readings of a single hard case rather than scaling up typological labels. It foregrounds prompt-sensitivity analysis as a methodological issue by varying prompt scaffolding and framing in a balanced 2x3 design. Using footnote 6 in Chubin and Moitra (1975) and Gilbert's (1977) reconstruction as a probe, I implement a two-stage GPT-5 pipeline: a citation-text-only surface classification and expectation pass, followed by cross-document interpretative reconstruction using the citing and cited full texts. Across 90 reconstructions, the model produces 450 distinct hypotheses. Close reading and inductive coding identify 21 recurring interpretative moves, and linear probability models estimate how prompt choices shift their frequencies and lexical repertoire. GPT-5's surface pass is highly stable, consistently classifying the citation as "supplementary". In reconstruction, the model generates a structured space of plausible alternatives, but scaffolding and examples redistribu...
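The abstract's balanced 2x3 design (two scaffolding levels crossed with three framing levels, yielding 90 reconstructions) can be sketched as below. The level names, the `build_prompt` helper, and the split of 15 runs per condition are illustrative assumptions for this sketch, not details reported in the paper.

```python
from itertools import product

# Assumed level names: the paper only reports a balanced 2x3 design
# varying prompt scaffolding and framing, not the concrete labels.
scaffolding = ["minimal", "structured"]        # 2 scaffolding levels (assumed)
framing = ["neutral", "typological", "thick"]  # 3 framing levels (assumed)

# Cross the factors to get the 6 prompt conditions of the 2x3 design.
conditions = list(product(scaffolding, framing))

# 90 reconstructions total; 15 runs per condition is an assumed split.
runs_per_condition = 15

def build_prompt(scaffold, frame, citation_context):
    """Assemble one prompt variant for a cell of the 2x3 design (hypothetical)."""
    return (
        f"[scaffolding={scaffold}] [framing={frame}]\n"
        f"Reconstruct the interpretative function of this citation:\n"
        f"{citation_context}"
    )

prompts = [
    build_prompt(s, f, "footnote 6 in Chubin and Moitra (1975)")
    for (s, f) in conditions
    for _ in range(runs_per_condition)
]
print(len(conditions), len(prompts))  # → 6 90
```

Each of the 90 prompts would then drive one reconstruction pass of the two-stage pipeline, with the resulting hypotheses coded for the recurring interpretative moves.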

Related Articles

LLMs

Have Companies Begun Adopting Claude Co-Work at an Enterprise Level?

Hi Guys, My company is considering purchasing the Claude Enterprise plan. The main two constraints are: - Being able to block usage of Cl...

Reddit - Artificial Intelligence · 1 min read
LLMs

What I learned about multi-agent coordination running 9 specialized Claude agents

I've been experimenting with multi-agent AI systems and ended up building something more ambitious than I originally planned: a fully ope...

Reddit - Artificial Intelligence · 1 min read
LLMs

[D] The problem with comparing AI memory system benchmarks — different evaluation methods make scores meaningless

I've been reviewing how various AI memory systems evaluate their performance and noticed a fundamental issue with cross-system comparison...

Reddit - Machine Learning · 1 min read
LLMs

Shifting to AI model customization is an architectural imperative | MIT Technology Review

In the early days of large language models (LLMs), we grew accustomed to massive 10x jumps in reasoning and coding capability with every ...

MIT Technology Review · 6 min read