[2603.24580] Retrieval Improvements Do Not Guarantee Better Answers: A Study of RAG for AI Policy QA
Computer Science > Computation and Language
arXiv:2603.24580 (cs)
[Submitted on 25 Mar 2026]

Title: Retrieval Improvements Do Not Guarantee Better Answers: A Study of RAG for AI Policy QA
Authors: Saahil Mathur, Ryan David Rittner, Vedant Ajit Thakur, Daniel Stuart Schiff, Tunazzina Islam

Abstract: Retrieval-augmented generation (RAG) systems are increasingly used to analyze complex policy documents, but achieving sufficient reliability for expert usage remains challenging in domains characterized by dense legal language and evolving, overlapping regulatory frameworks. We study the application of RAG to AI governance and policy analysis using the AI Governance and Regulatory Archive (AGORA) corpus, a curated collection of 947 AI policy documents. Our system combines a ColBERT-based retriever fine-tuned with contrastive learning and a generator aligned to human preferences using Direct Preference Optimization (DPO). We construct synthetic queries and collect pairwise preferences to adapt the system to the policy domain. Through experiments evaluating retrieval quality, answer relevance, and faithfulness, we find that domain-specific fine-tuning improves retrieval metrics but does not consistently improve end-to-end question answering performance. In some cases, stronger retrieval counterintuiti...
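The abstract mentions aligning the generator with Direct Preference Optimization (DPO). As background, a minimal sketch of the per-pair DPO loss is shown below; the function name, argument names, and `beta` default are illustrative assumptions, not taken from the paper. The loss rewards the policy for assigning a higher log-likelihood margin to the preferred ("chosen") answer than the reference model does, scaled by a temperature `beta`.

```python
import math

def dpo_loss(policy_chosen_logp: float, policy_rejected_logp: float,
             ref_chosen_logp: float, ref_rejected_logp: float,
             beta: float = 0.1) -> float:
    """DPO loss for a single preference pair.

    Each argument is a summed token log-probability of the chosen or
    rejected response under the policy or the frozen reference model.
    """
    # Log-ratio of policy vs. reference on each response
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    # Positive margin => policy favors the chosen response more than the reference does
    margin = beta * (chosen_ratio - rejected_ratio)
    # -log sigmoid(margin): small when the margin is large and positive
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# When policy and reference agree exactly, the loss is log 2 (~0.693),
# the value of -log sigmoid(0); shifting probability toward the chosen
# response lowers it.
```

In practice this loss is averaged over a batch of preference pairs and minimized with respect to the policy parameters only, with the reference model kept frozen.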