[2509.21028] Who Gets Cited Most? Benchmarking Long-Context Numerical Reasoning on Scientific Articles


Computer Science > Artificial Intelligence
arXiv:2509.21028 (cs)
[Submitted on 25 Sep 2025 (v1), last revised 1 Mar 2026 (this version, v3)]

Title: Who Gets Cited Most? Benchmarking Long-Context Numerical Reasoning on Scientific Articles
Authors: Miao Li, Alexander Gurung, Irina Saparina, Mirella Lapata

Abstract: We introduce SciTrek, a diagnostic question-answering benchmark designed to probe long-context numerical reasoning in large language models (LLMs). Existing long-context benchmarks mostly focus on simple information retrieval, rely on artificial contexts, or leave numerical reasoning unexplored. SciTrek addresses these limitations through questions that require counting, sorting, aggregating, and comparing information across multiple full-text scientific articles. Questions are automatically generated by formulating them as SQL queries over a database constructed from article metadata (titles, authors, and references), with ground-truth answers obtained via query execution. This design provides verifiable reasoning traces for fine-grained error analysis and enables efficient scaling to longer contexts with minimal human supervision. Extensive experiments on thirteen frontier open-weight and proprietary LLMs reveal that SciTrek poses a significant challenge: even the best-performing model ...
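The construction described in the abstract — questions phrased as SQL queries over an article-metadata database, with ground truth obtained by executing the query — can be illustrated with a minimal sketch. The schema, table names, and question template below are illustrative assumptions for a toy example, not the benchmark's actual implementation.

```python
import sqlite3

# Hypothetical SciTrek-style setup: a small metadata database of
# titles, authors, and references (schema is an assumption).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE articles (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE authors  (article_id INTEGER, name TEXT);
CREATE TABLE refs     (citing_id INTEGER, cited_id INTEGER);
""")
conn.executemany("INSERT INTO articles VALUES (?, ?)",
                 [(1, "Paper A"), (2, "Paper B"), (3, "Paper C")])
conn.executemany("INSERT INTO authors VALUES (?, ?)",
                 [(1, "Smith"), (2, "Jones"), (3, "Smith")])
conn.executemany("INSERT INTO refs VALUES (?, ?)",
                 [(1, 2), (3, 2), (3, 1)])

# Question template: "Which article is cited most across these articles?"
# The SQL query both defines the question and yields its verifiable answer.
query = """
SELECT a.title, COUNT(*) AS n_citations
FROM refs r JOIN articles a ON a.id = r.cited_id
GROUP BY r.cited_id
ORDER BY n_citations DESC, a.title
LIMIT 1;
"""
title, n = conn.execute(query).fetchone()
print(title, n)  # ground-truth answer obtained via query execution
```

Because the answer comes from deterministic query execution, the same machinery scales to longer contexts (more articles in the database) with no extra human annotation, which is the property the abstract highlights.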

Originally published on March 03, 2026. Curated by AI News.

Related Articles

[2603.17839] How do LLMs Compute Verbal Confidence
[2603.15970] 100x Cost & Latency Reduction: Performance Analysis of AI Query Approximation using Lightweight Proxy Models
[2603.10062] Multi-Agent Memory from a Computer Architecture Perspective: Visions and Challenges Ahead
[2603.09085] Not All News Is Equal: Topic- and Event-Conditional Sentiment from Finetuned LLMs for Aluminum Price Forecasting