[2509.03906] Toward Clinically Explainable AI for Medical Diagnosis: A Foundation Model with Human-Compatible Reasoning via Reinforcement Learning
Computer Science > Artificial Intelligence

arXiv:2509.03906 (cs)

[Submitted on 4 Sep 2025 (v1), last revised 2 Mar 2026 (this version, v2)]

Title: Toward Clinically Explainable AI for Medical Diagnosis: A Foundation Model with Human-Compatible Reasoning via Reinforcement Learning

Authors: Qika Lin, Yifan Zhu, Bin Pu, Ling Huang, Haoran Luo, Jingying Ma, Feng Wu, Kai He, Jiaxing Xu, Zhen Peng, Tianzhe Zhao, Fangzhi Xu, Jian Zhang, Zhonghong Ou, Erik Cambria, Swapnil Mishra, Mengling Feng

Abstract: The clinical adoption of artificial intelligence (AI) in medical diagnostics is critically hampered by its black-box nature, which prevents clinicians from verifying the rationale behind automated decisions. To overcome this fundamental barrier, we introduce DeepMedix-R1, a foundation model (FM) for chest X-ray (CXR) interpretation that generates not only accurate diagnoses but also a transparent, step-by-step reasoning process grounded in specific visual evidence. Our methodology employs a sequential training strategy, beginning with instruction fine-tuning, followed by a cold-start phase to elicit reasoning capabilities. Critically, we then implement reinforcement learning with grounded rewards to meticulously refine the model, aligning both its diagnostic o...
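The abstract mentions "reinforcement learning with grounded rewards" but does not specify the reward formulation. As a minimal illustrative sketch only: one common way to ground a reward is to combine diagnostic correctness with the spatial overlap (IoU) between the regions the model cites as evidence and annotated ground-truth regions. The function names, box format, and weights below are assumptions for illustration, not the paper's actual reward.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def grounded_reward(pred_label, gold_label, pred_box, gold_box,
                    w_dx=0.7, w_ground=0.3):
    """Hypothetical composite reward: diagnostic accuracy plus
    visual-grounding quality. Weights are illustrative."""
    r_dx = 1.0 if pred_label == gold_label else 0.0  # exact-match diagnosis term
    r_ground = iou(pred_box, gold_box)               # evidence-overlap term
    return w_dx * r_dx + w_ground * r_ground
```

A reward of this shape penalizes a model that reaches the right diagnosis while citing the wrong image region, which is the failure mode a clinically explainable system must avoid.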