[2604.08571] Robust Reasoning Benchmark
Computer Science > Machine Learning arXiv:2604.08571 (cs) [Submitted on 26 Mar 2026]
Title: Robust Reasoning Benchmark
Authors: Pavel Golikov, Evgenii Opryshko, Gennady Pekhimenko, Mark C. Jeffrey
Abstract: While Large Language Models (LLMs) achieve high performance on standard mathematical benchmarks, their underlying reasoning processes remain highly overfit to standard textual formatting. We propose a perturbation pipeline consisting of 14 techniques to evaluate the robustness of LLM reasoning. We apply this pipeline to the AIME 2024 dataset and evaluate 8 state-of-the-art models on the resulting benchmark. While frontier models exhibit resilience, open-weight reasoning models suffer catastrophic collapses (average accuracy drops of up to 55% across perturbations, and up to 100% on some), exposing structural fragility. To further disentangle mechanical parsing failures from downstream reasoning failures, we strictly isolate the models' working-memory capacity by forcing models to solve multiple unperturbed mathematical problems sequentially within a single context window. Our results indicate that open-weight models ranging from 7B to 120B parameters, as well as Claude Opus 4.6, exhibit accuracy decay on subsequent problems. This degradation demonstrates that intermediate reasoning steps permanently pollute standard dense attention mechanisms. We argue that to achieve re...
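The abstract does not enumerate the 14 perturbation techniques, so the following is only a minimal sketch of what a text-perturbation pipeline of this kind might look like. The two perturbations shown (stray-whitespace insertion and random word uppercasing) are hypothetical stand-ins, not techniques confirmed by the paper:

```python
import random

def insert_whitespace(text: str, rng: random.Random) -> str:
    """Insert stray spaces at random positions to disturb token boundaries."""
    chars = list(text)
    # Insert at ~5% of positions; walk indices in reverse so they stay valid.
    positions = rng.sample(range(len(chars)), k=max(1, len(chars) // 20))
    for i in sorted(positions, reverse=True):
        chars.insert(i, " ")
    return "".join(chars)

def uppercase_words(text: str, rng: random.Random) -> str:
    """Randomly uppercase some words to disturb surface formatting."""
    words = text.split(" ")
    return " ".join(w.upper() if rng.random() < 0.3 else w for w in words)

def perturb(problem: str, techniques, seed: int = 0) -> list[str]:
    """Apply each perturbation independently to the original problem,
    producing one perturbed variant per technique."""
    rng = random.Random(seed)
    return [technique(problem, rng) for technique in techniques]

problem = "Find the remainder when 2^10 is divided by 7."
variants = perturb(problem, [insert_whitespace, uppercase_words])
```

Each variant preserves the underlying mathematical content while altering only the surface text, so any accuracy drop relative to the unperturbed problem can be attributed to formatting sensitivity rather than problem difficulty.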