[2512.05658] Multilingual Medical Reasoning for Question Answering with Large Language Models
Computer Science > Computation and Language
arXiv:2512.05658 (cs)
[Submitted on 5 Dec 2025 (v1), last revised 30 Mar 2026 (this version, v2)]

Title: Multilingual Medical Reasoning for Question Answering with Large Language Models
Authors: Pietro Ferrazzi, Aitor Soroa, Rodrigo Agerri

Abstract: Large Language Models (LLMs) with reasoning capabilities have recently demonstrated strong potential in medical Question Answering (QA). Existing approaches are largely English-focused and primarily rely on distillation from general-purpose LLMs, raising concerns about the reliability of their medical knowledge. In this work, we present a method to generate multilingual reasoning traces based on medical knowledge extracted from Wikipedia. We produce 500k traces in English, Italian, and Spanish, using a retrieval-augmented generation approach over medical information from Wikipedia. The traces are generated to solve medical questions drawn from MedQA and MedMCQA, which we extend to Italian and Spanish. We test our pipeline in both in-domain and out-of-domain settings across medical QA benchmarks, and demonstrate that our reasoning traces improve performance both when used via in-context learning (few-shot) and supervised fine-tuning, yielding state-of-the-art results among 8B-parameter LLMs. We believe that these resources can...