[2506.17585] Cite Pretrain: Retrieval-Free Knowledge Attribution for Large Language Models
Computer Science > Artificial Intelligence

arXiv:2506.17585 (cs)

[Submitted on 21 Jun 2025 (v1), last revised 3 Apr 2026 (this version, v3)]

Title: Cite Pretrain: Retrieval-Free Knowledge Attribution for Large Language Models

Authors: Yukun Huang, Sanxing Chen, Jian Pei, Manzil Zaheer, Bhuwan Dhingra

Abstract: Trustworthy language models should provide both correct and verifiable answers. However, citations generated directly by standalone LLMs are often unreliable. As a result, current systems insert citations by querying an external retriever at inference time, introducing latency, infrastructure dependence, and vulnerability to retrieval noise. We explore whether LLMs can be made to reliably attribute to the documents seen during continual pretraining without test-time retrieval, by revising the training process. To study this, we construct CitePretrainBench, a benchmark that mixes real-world corpora (Wikipedia, Common Crawl, arXiv) with novel documents and probes both short-form (single-fact) and long-form (multi-fact) citation tasks. Our approach follows a two-stage process: (1) continual pretraining to index factual knowledge by binding it to persistent document identifiers; and (2) instruction tuning to elicit citation behavior. We introduce Active Indexing for the first stage, which creates genera...
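The two-stage recipe sketched in the abstract (stage 1: continual pretraining that binds facts to persistent document identifiers; stage 2: instruction tuning that elicits citation behavior) can be illustrated with a minimal data-construction sketch. The ID format, tag scheme, and helper names below are illustrative assumptions, not the paper's actual implementation or data format.

```python
# Minimal sketch (assumptions, not the paper's code) of building
# continual-pretraining data that binds document text to persistent
# identifiers, so a model can later cite sources without retrieval.

from dataclasses import dataclass


@dataclass
class Document:
    doc_id: str  # persistent identifier, e.g. "wiki_000042" (hypothetical format)
    text: str


def passive_index(doc: Document) -> str:
    """Baseline binding: wrap the raw document in tags carrying its ID,
    so the identifier co-occurs with the document's facts during training."""
    return f'<doc id="{doc.doc_id}">\n{doc.text}\n</doc>'


def active_index(doc: Document, facts: list[str]) -> list[str]:
    """Illustrative take on 'Active Indexing': restate individual facts
    outside the original document, each paired with the citation token
    the model should learn to emit alongside the fact."""
    return [f"{fact} [{doc.doc_id}]" for fact in facts]


if __name__ == "__main__":
    doc = Document("wiki_000042", "Marie Curie won Nobel Prizes in physics and chemistry.")
    print(passive_index(doc))
    for example in active_index(doc, ["Curie won Nobel Prizes in two different sciences."]):
        print(example)
```

Under this reading, passive indexing only exposes the model to the identifier next to the source text, while the active variant forces the model to reproduce the identifier in new compositional contexts; the abstract's truncated description of Active Indexing presumably specifies how those augmented contexts are generated.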