[2509.22258] Beyond Classification Accuracy: Neural-MedBench and the Need for Deeper Reasoning Benchmarks
Computer Science > Computer Vision and Pattern Recognition

arXiv:2509.22258 (cs)

[Submitted on 26 Sep 2025 (v1), last revised 4 Apr 2026 (this version, v5)]

Title: Beyond Classification Accuracy: Neural-MedBench and the Need for Deeper Reasoning Benchmarks

Authors: Miao Jing, Mengting Jia, Junling Lin, Zhongxia Shen, Huan Gao, Mingkun Xu, Shangyang Li

Abstract: Recent advances in vision-language models (VLMs) have achieved remarkable performance on standard medical benchmarks, yet their true clinical reasoning ability remains unclear. Existing datasets predominantly emphasize classification accuracy, creating an evaluation illusion in which models appear proficient while still failing at high-stakes diagnostic reasoning. We introduce Neural-MedBench, a compact yet reasoning-intensive benchmark specifically designed to probe the limits of multimodal clinical reasoning in neurology. Neural-MedBench integrates multi-sequence MRI scans, structured electronic health records, and clinical notes, and encompasses three core task families: differential diagnosis, lesion recognition, and rationale generation. To ensure reliable evaluation, we develop a hybrid scoring pipeline that combines LLM-based graders, clinician validation, and semantic similarity metrics. Through systematic evaluation of state-o...
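The abstract does not specify how the hybrid scoring pipeline combines its three signals. A minimal sketch, assuming a weighted linear combination of an LLM-grader score and an embedding-based semantic similarity (function names, weights, and the aggregation rule are all hypothetical illustrations, not the authors' method):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def hybrid_score(llm_grade, ref_embedding, pred_embedding,
                 w_llm=0.6, w_sem=0.4):
    """Combine an LLM-grader score (assumed in [0, 1]) with a
    semantic-similarity score via hypothetical fixed weights.
    Clinician validation would sit outside this automated step."""
    semantic = cosine_similarity(ref_embedding, pred_embedding)
    return w_llm * llm_grade + w_sem * semantic

# Example: identical embeddings give semantic similarity 1.0,
# so the combined score is 0.6 * 0.5 + 0.4 * 1.0 = 0.7.
score = hybrid_score(0.5, [1.0, 0.0, 0.0], [1.0, 0.0, 0.0])
```

In practice the embeddings would come from a sentence-encoder over the model's rationale and a reference rationale; the weights shown here are placeholders.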