[2602.21857] Distill and Align Decomposition for Enhanced Claim Verification
Summary
This paper presents a reinforcement learning approach that enhances claim verification by jointly optimizing decomposition quality and verifier alignment, achieving state-of-the-art results with an 8B-parameter decomposer.
Why It Matters
With the rise of misinformation, effective claim verification is crucial. This research addresses a key limitation of existing methods, the mismatch between how claims are decomposed and how well the resulting subclaims can be verified, by introducing a framework that improves both the quality of decomposed claims and downstream verification accuracy, making it valuable for AI applications in fact-checking and information integrity.
Key Takeaways
- Introduces a reinforcement learning method for claim verification.
- Optimizes both decomposition quality and verification performance.
- Achieves a macro-F1 score of 71.75%, outperforming existing methods.
- Utilizes structured reasoning and teacher-distilled exemplars.
- Enables smaller language models to perform at state-of-the-art levels.
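The core optimization idea, group-relative advantages over a multi-objective reward, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the reward weights and per-objective scores are hypothetical, and GRPO here is reduced to its advantage computation (normalizing each sampled decomposition's reward against its group's mean and standard deviation).

```python
from statistics import mean, pstdev

def combined_reward(fmt, align, quality, w=(0.2, 0.5, 0.3)):
    """Multi-objective reward balancing format compliance, verifier
    alignment, and decomposition quality. Weights are illustrative."""
    return w[0] * fmt + w[1] * align + w[2] * quality

def grpo_advantages(rewards):
    """Group-relative advantage as used in GRPO-style training:
    each sample is scored relative to its own sampling group."""
    mu, sigma = mean(rewards), pstdev(rewards)
    return [(r - mu) / (sigma or 1.0) for r in rewards]

# One claim, a group of four sampled decompositions with hypothetical
# (format, alignment, quality) scores for each sample.
group = [(1.0, 0.8, 0.9), (1.0, 0.5, 0.7), (0.0, 0.6, 0.4), (1.0, 0.9, 0.6)]
rewards = [combined_reward(*g) for g in group]
advantages = grpo_advantages(rewards)
```

Because advantages are centered within each group, they sum to zero: above-average decompositions are reinforced and below-average ones are penalized, without needing a learned value function.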
Computer Science > Artificial Intelligence
arXiv:2602.21857 (cs) [Submitted on 25 Feb 2026]
Title: Distill and Align Decomposition for Enhanced Claim Verification
Authors: Jabez Magomere, Elena Kochkina, Samuel Mensah, Simerjot Kaur, Fernando Acero, Arturo Oncevay, Charese H. Smiley, Xiaomo Liu, Manuela Veloso
Abstract: Complex claim verification requires decomposing sentences into verifiable subclaims, yet existing methods struggle to align decomposition quality with verification performance. We propose a reinforcement learning (RL) approach that jointly optimizes decomposition quality and verifier alignment using Group Relative Policy Optimization (GRPO). Our method integrates: (i) structured sequential reasoning; (ii) supervised finetuning on teacher-distilled exemplars; and (iii) a multi-objective reward balancing format compliance, verifier alignment, and decomposition quality. Across six evaluation settings, our trained 8B decomposer improves downstream verification performance to 71.75% macro-F1, outperforming prompt-based approaches (+1.99, +6.24) and existing RL methods (+5.84). Human evaluation confirms the high quality of the generated subclaims. Our framework enables smaller language models to achieve state-of-the-art claim verification by jointly optimising for verification accuracy and decomposition quality.
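The decompose-then-verify pipeline the abstract describes can be sketched as a toy example. Everything here is a hypothetical stand-in: the string-splitting decomposer and the set-lookup verifier merely illustrate the control flow in which a complex claim is supported only if every decomposed subclaim is independently supported.

```python
from typing import Callable, List

def verify_claim(claim: str,
                 decompose: Callable[[str], List[str]],
                 verify: Callable[[str], bool]) -> bool:
    """Decompose a complex claim into subclaims, verify each one,
    and accept the claim only if all subclaims are supported."""
    subclaims = decompose(claim)
    return all(verify(s) for s in subclaims)

# Toy stand-ins (illustrative only) for the trained decomposer and verifier.
decompose = lambda c: [s.strip() for s in c.split(" and ")]
evidence = {"Paris is in France", "The Seine flows through Paris"}
verify = lambda s: s in evidence

supported = verify_claim(
    "Paris is in France and The Seine flows through Paris",
    decompose, verify)
```

The paper's contribution sits in the `decompose` step: because the decomposer is trained with a verifier-alignment reward, its subclaims are shaped to be the kind the downstream verifier can actually check, rather than being optimized for decomposition quality alone.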