[2510.25867] Synthesizing High-Quality Visual Question Answering from Medical Documents with Generator-Verifier LMMs
Summary
This paper presents MedVLSynther, a framework for synthesizing high-quality visual question answering (VQA) data from medical documents to enhance the training of large multimodal models (LMMs).
Why It Matters
MedVLSynther addresses the scarcity of large, high-quality datasets in medical VQA, enabling better training of models that reason jointly over images and text. This advance has significant implications for medical diagnostics and educational tools in healthcare.
Key Takeaways
- MedVLSynther synthesizes VQA items from open biomedical literature.
- The generator-verifier framework ensures high-quality, clinically valid questions.
- Training with verified data significantly improves model accuracy on medical benchmarks.
- The approach is reproducible and privacy-preserving, utilizing open literature.
- The framework has potential applications in enhancing medical education and diagnostics.
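The generate-then-verify loop behind these takeaways can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's actual API: the names `VQAItem`, `passes_gates`, `synthesize`, and the scoring threshold are assumptions standing in for the paper's rubric-guided generator and multi-stage verifier.

```python
from dataclasses import dataclass

@dataclass
class VQAItem:
    stem: str            # self-contained question stem
    options: list[str]   # parallel, mutually exclusive choices
    answer_index: int    # index of the single correct option

def passes_gates(item: VQAItem) -> bool:
    """Hard gates in the spirit of the paper's essential checks
    (self-containment, single correct answer, option exclusivity)."""
    has_stem = len(item.stem.strip()) > 0
    has_options = len(item.options) >= 2
    unique_options = len(set(item.options)) == len(item.options)
    valid_answer = 0 <= item.answer_index < len(item.options)
    return has_stem and has_options and unique_options and valid_answer

def synthesize(figures, generate, verify_score, threshold=0.8):
    """Keep only items that pass all hard gates and whose rubric score
    (positive points minus penalties, normalized) clears the threshold."""
    accepted = []
    for fig in figures:
        item = generate(fig)              # generator LMM proposes an item
        if not passes_gates(item):
            continue                      # essential gates: reject outright
        if verify_score(item) >= threshold:
            accepted.append(item)
    return accepted
```

In the paper, both the generator and the verifier are LMMs; here they are stand-in callables so the control flow of gate-then-score acceptance is visible on its own.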
Computer Science > Machine Learning
arXiv:2510.25867 (cs)
[Submitted on 29 Oct 2025 (v1), last revised 18 Feb 2026 (this version, v2)]
Title: Synthesizing High-Quality Visual Question Answering from Medical Documents with Generator-Verifier LMMs
Authors: Xiaoke Huang, Ningsen Wang, Hui Liu, Xianfeng Tang, Yuyin Zhou
Abstract: Large Multimodal Models (LMMs) are increasingly capable of answering medical questions that require joint reasoning over images and text, yet training general medical VQA systems is impeded by the lack of large, openly usable, high-quality corpora. We present MedVLSynther, a rubric-guided generator-verifier framework that synthesizes high-quality multiple-choice VQA items directly from open biomedical literature by conditioning on figures, captions, and in-text references. The generator produces self-contained stems and parallel, mutually exclusive options under a machine-checkable JSON schema; a multi-stage verifier enforces essential gates (self-containment, single correct answer, clinical validity, image-text consistency), awards fine-grained positive points, and penalizes common failure modes before acceptance. Applying this pipeline to PubMed Central yields MedSynVQA: 13,087 audited questions over 14,803 images spanning 13 imaging modalities and 28 ana...
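The abstract's "machine-checkable JSON schema" means a generator response can be rejected mechanically before any LMM-based verification runs. A minimal stdlib sketch of such a check follows; the field names (`stem`, `options`, `answer`, `rationale`) are illustrative assumptions, not the schema published with the paper.

```python
import json

# Required fields and their JSON types for one MCQ item
# (field names are assumptions for illustration).
REQUIRED_FIELDS = {"stem": str, "options": list, "answer": int, "rationale": str}

def check_schema(raw: str):
    """Parse a generator's raw response and report schema violations.

    Returns (item, errors): item is the parsed dict (or None on a parse
    failure) and errors is a list of human-readable violation messages.
    """
    try:
        item = json.loads(raw)
    except json.JSONDecodeError as e:
        return None, [f"invalid JSON: {e}"]
    if not isinstance(item, dict):
        return None, ["top-level value must be a JSON object"]
    errors = [
        f"missing or mistyped field '{name}'"
        for name, typ in REQUIRED_FIELDS.items()
        if not isinstance(item.get(name), typ)
    ]
    return item, errors
```

Because the check operates on the raw string, malformed generator output is filtered cheaply and deterministically, reserving the expensive verifier LMM for items that are at least structurally valid.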