[2603.01343] PanCanBench: A Comprehensive Benchmark for Evaluating Large Language Models in Pancreatic Oncology
Computer Science > Computation and Language
arXiv:2603.01343 (cs)
[Submitted on 2 Mar 2026]

Title: PanCanBench: A Comprehensive Benchmark for Evaluating Large Language Models in Pancreatic Oncology

Authors: Yimin Zhao, Sheela R. Damle, Simone E. Dekker, Scott Geng, Karly Williams Silva, Jesse J. Hubbard, Manuel F. Fernandez, Fatima Zelada-Arenas, Alejandra Alvarez, Brianne Flores, Alexis Rodriguez, Stephen Salerno, Carrie Wright, Zihao Wang, Pang Wei Koh, Jeffrey T. Leek

Abstract: Large language models (LLMs) have achieved expert-level performance on standardized examinations, yet multiple-choice accuracy poorly reflects real-world clinical utility and safety. As patients and clinicians increasingly use LLMs for guidance on complex conditions such as pancreatic cancer, evaluation must extend beyond general medical knowledge. Existing frameworks, such as HealthBench, rely on simulated queries and lack disease-specific depth. Moreover, high rubric-based scores do not ensure factual correctness, underscoring the need to assess hallucinations. We developed a human-in-the-loop pipeline to create expert rubrics for de-identified patient questions from the Pancreatic Cancer Action Network (PanCAN). The resulting benchmark, PanCanBench, includes 3,130 question-specific criteria across 282 authe...