[2601.12910] SciCoQA: Quality Assurance for Scientific Paper–Code Alignment

arXiv - AI · 3 min read

About this article


Computer Science > Computation and Language
arXiv:2601.12910 (cs)
[Submitted on 19 Jan 2026 (v1), last revised 26 Mar 2026 (this version, v2)]

Title: SciCoQA: Quality Assurance for Scientific Paper–Code Alignment
Authors: Tim Baumgärtner, Iryna Gurevych

Abstract: We present SciCoQA, a dataset for detecting discrepancies between scientific publications and their codebases, aimed at ensuring faithful implementations. We construct SciCoQA from GitHub issues and reproducibility papers, and, to scale the dataset, we propose a synthetic data generation method for constructing paper-code discrepancies. We analyze these discrepancies in detail and propose discrepancy types and categories to better characterize the mismatches that occur. In total, our dataset consists of 635 paper-code discrepancies (92 real, 543 synthetic), covering the AI domain through real-world data and extending to Physics, Quantitative Biology, and other computational sciences through synthetic data. Our evaluation of 22 LLMs demonstrates the difficulty of SciCoQA, particularly for instances involving omitted paper details, long-context inputs, and data outside the models' pre-training corpora. The best-performing models in our evaluation, Gemini 3.1 Pro and GPT-5 Mini, detect only 46.7% of real-world paper-code discrepancies.

Subjects: Computation and Language (cs.CL)
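The abstract compresses two mechanisms: framing paper-code alignment as a detection task for an LLM, and synthesizing discrepancies to scale the dataset. Below is a minimal Python sketch of both ideas under stated assumptions; the instance fields, the prompt wording, and the numeric-perturbation rule are all illustrative inventions, not SciCoQA's actual schema or generation method.

```python
# Illustrative sketch only: field names, prompt wording, and the
# perturbation rule are assumptions, not SciCoQA's actual design.
import re
from dataclasses import dataclass


@dataclass
class DiscrepancyInstance:
    paper_excerpt: str     # claim quoted from the publication
    code_snippet: str      # the (possibly mismatched) implementation
    has_discrepancy: bool  # gold label


def build_detection_prompt(inst: DiscrepancyInstance) -> str:
    """Pose alignment checking as a classification question for an LLM."""
    return (
        f"Paper excerpt:\n{inst.paper_excerpt}\n\n"
        f"Code:\n{inst.code_snippet}\n\n"
        "Does the code faithfully implement the paper excerpt? "
        "Answer 'consistent' or 'discrepancy' and explain."
    )


def inject_numeric_discrepancy(code: str) -> str:
    """Toy synthetic-discrepancy generator: shift the first numeric
    literal by an order of magnitude so code and paper disagree."""
    def bump(m: re.Match) -> str:
        return format(float(m.group()) * 10, "g")
    return re.sub(r"\d+\.?\d*", bump, code, count=1)


if __name__ == "__main__":
    aligned = "optimizer = Adam(model.parameters(), lr=0.001)"
    inst = DiscrepancyInstance(
        paper_excerpt="We train with Adam at a learning rate of 1e-3.",
        code_snippet=inject_numeric_discrepancy(aligned),  # lr becomes 0.01
        has_discrepancy=True,
    )
    print(build_detection_prompt(inst))
```

The appeal of perturbation-based generation is that the gold label comes for free: starting from code known to match the paper, any controlled edit yields a guaranteed discrepancy, which is presumably how synthetic instances can outnumber real ones (543 vs. 92) in the dataset.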

Originally published on March 27, 2026. Curated by AI News.

Related Articles

Washington needs AI guardrails — now | Opinion
AI Safety

We need legislation that draws clear lines on what AI systems may and may not do on behalf of the United States government.

AI Tools & Products · 3 min
[2509.21385] Debugging Concept Bottleneck Models through Removal and Retraining
Machine Learning

arXiv - Machine Learning · 4 min
[2512.00804] Epistemic Bias Injection: Biasing LLMs via Selective Context Retrieval
LLMs

arXiv - AI · 4 min
[2509.24296] DiffuGuard: How Intrinsic Safety is Lost and Found in Diffusion Large Language Models
LLMs

arXiv - AI · 4 min