[2509.21028] Who Gets Cited Most? Benchmarking Long-Context Numerical Reasoning on Scientific Articles
Computer Science > Artificial Intelligence

arXiv:2509.21028 (cs)

[Submitted on 25 Sep 2025 (v1), last revised 1 Mar 2026 (this version, v3)]

Title: Who Gets Cited Most? Benchmarking Long-Context Numerical Reasoning on Scientific Articles

Authors: Miao Li, Alexander Gurung, Irina Saparina, Mirella Lapata

Abstract: We introduce SciTrek, a diagnostic question-answering benchmark designed to probe long-context numerical reasoning in large language models (LLMs). Existing long-context benchmarks mostly focus on simple information retrieval, rely on artificial contexts, or leave numerical reasoning unexplored. SciTrek addresses these limitations through questions that require counting, sorting, aggregating, and comparing information across multiple full-text scientific articles. Questions are automatically generated by formulating them as SQL queries over a database constructed from article metadata (titles, authors, and references), with ground-truth answers obtained via query execution. This design provides verifiable reasoning traces for fine-grained error analysis and enables efficient scaling to longer contexts with minimal human supervision. Extensive experiments on thirteen frontier open-weight and proprietary LLMs reveal that SciTrek poses a significant challenge: even the best-performing model ...
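For illustration, here is a minimal sketch of the kind of pipeline the abstract describes: article metadata is loaded into a relational database, and a question template is instantiated as a SQL query whose execution yields a verifiable ground-truth answer. The schema, table names, and data below are invented for this example and are not the paper's actual database design.

```python
# Minimal sketch of SciTrek-style question generation, assuming a toy
# schema (articles / authors / refs); illustrative only, not the
# paper's actual database design.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE articles (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE authors  (article_id INTEGER, name TEXT);
CREATE TABLE refs     (article_id INTEGER, cited_title TEXT);
""")

# Populate with metadata extracted from full-text articles (toy data).
conn.executemany("INSERT INTO articles VALUES (?, ?)",
                 [(1, "Paper A"), (2, "Paper B"), (3, "Paper C")])
conn.executemany("INSERT INTO authors VALUES (?, ?)",
                 [(1, "Smith"), (2, "Smith"), (2, "Jones"), (3, "Jones")])
conn.executemany("INSERT INTO refs VALUES (?, ?)",
                 [(1, "Paper C"), (2, "Paper C"), (3, "Paper A")])

# A question template such as "Which paper is cited most across these
# articles?" is formulated as a SQL query; executing it against the
# database yields the ground-truth answer, and the query itself serves
# as a verifiable reasoning trace.
query = """
SELECT cited_title, COUNT(*) AS n_citations
FROM refs
GROUP BY cited_title
ORDER BY n_citations DESC
LIMIT 1;
"""
print(conn.execute(query).fetchone())  # ('Paper C', 2)
```

The model is then asked the natural-language question with the full articles as context, and its answer is checked against the query result; counting, sorting, aggregation, and comparison questions follow the same pattern with different query templates.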