[2509.25184] Incentive-Aligned Multi-Source LLM Summaries
Summary
The paper introduces Truthful Text Summarization (TTS), a framework for improving the factual accuracy of multi-source summaries generated by large language models (LLMs).
Why It Matters
As LLMs become integral to information synthesis, ensuring the accuracy and reliability of generated content is crucial. Current pipelines offer sources weak incentives to report accurately and are vulnerable to adversarial content; TTS addresses the challenge of conflicting information by making truthful reporting the incentivized strategy for sources, thereby improving the overall quality of automated summaries.
Key Takeaways
- TTS decomposes a draft summary into atomic claims and elicits each source's stance on every claim.
- It employs a peer-prediction mechanism to align incentives for truthful reporting.
- The framework enhances factual accuracy without requiring ground-truth labels.
- Experiments demonstrate improved factual accuracy and robustness while preserving fluency.
- TTS disincentivizes manipulation by rewarding informative agreement among sources.
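The pipeline implied by these takeaways can be sketched as a toy filtering loop. This is an illustrative stand-in, not the paper's implementation: the function name `tts_filter`, the stance encoding, and the scoring rule (mean agreement with peers, a simplification of the paper's peer-prediction mechanism) are all assumptions for exposition.

```python
def tts_filter(draft_claims, stances, threshold=0.5):
    """Toy sketch of TTS steps (ii)-(iv): given per-source stances on each
    atomic claim of a draft summary, score each source by its mean agreement
    with peers (a simplified stand-in for the paper's peer-prediction
    mechanism) and keep only sources scoring at or above `threshold`.

    stances: {source: {claim: stance_label}}
    """
    sources = list(stances)

    def agree_rate(i):
        # Compare source i against every peer on every claim.
        pairs = [(c, j) for c in draft_claims for j in sources if j != i]
        hits = sum(stances[i][c] == stances[j][c] for c, j in pairs)
        return hits / len(pairs)

    scores = {i: agree_rate(i) for i in sources}
    kept = [i for i in sources if scores[i] >= threshold]
    return scores, kept
```

In this sketch, a source that contradicts its peers on most claims falls below the threshold and is excluded before re-summarization; the paper's actual mechanism replaces raw agreement with an incentive-aligned peer-prediction score.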
Computer Science > Computation and Language
arXiv:2509.25184 (cs)
[Submitted on 29 Sep 2025 (v1), last revised 25 Feb 2026 (this version, v2)]
Title: Incentive-Aligned Multi-Source LLM Summaries
Authors: Yanchen Jiang, Zhe Feng, Aranyak Mehta
Abstract: Large language models (LLMs) are increasingly used in modern search and answer systems to synthesize multiple, sometimes conflicting, texts into a single response, yet current pipelines offer weak incentives for sources to be accurate and are vulnerable to adversarial content. We introduce Truthful Text Summarization (TTS), an incentive-aligned framework that improves factual robustness without ground-truth labels. TTS (i) decomposes a draft synthesis into atomic claims, (ii) elicits each source's stance on every claim, (iii) scores sources with an adapted multi-task peer-prediction mechanism that rewards informative agreement, and (iv) filters unreliable sources before re-summarizing. We establish formal guarantees that align a source's incentives with informative honesty, making truthful reporting the utility-maximizing strategy. Experiments show that TTS improves factual accuracy and robustness while preserving fluency, aligning exposure with informative corroboration and disincentivizing manipulation.
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.A...
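Step (iii)'s scoring is described only at a high level here. A minimal sketch in the spirit of classic multi-task peer prediction, where agreement with a peer on the same claim is rewarded and agreement across different claims is penalized so that blind always-agree strategies earn roughly zero, might look like the following. The function name, stance encoding, and exact scoring rule are illustrative assumptions, not the paper's adapted mechanism.

```python
import random

def peer_prediction_scores(stances, seed=0):
    """Multi-task peer-prediction sketch: for each source, reward agreement
    with a peer on the SAME randomly drawn claim (bonus) minus agreement
    across two DIFFERENT claims (penalty). Uninformative strategies that
    report the same stance everywhere score ~zero in expectation.

    stances: {source: {claim: stance_label}}, every source rates every claim.
    """
    rng = random.Random(seed)
    sources = list(stances)
    claims = list(next(iter(stances.values())))
    scores = {}
    for i in sources:
        total = 0.0
        for j in sources:
            if j == i:
                continue
            # Bonus task: do i and j agree on a shared claim?
            t = rng.choice(claims)
            bonus = 1.0 if stances[i][t] == stances[j][t] else 0.0
            # Penalty tasks: do i and j "agree" across two distinct claims?
            t1, t2 = rng.sample(claims, 2)
            penalty = 1.0 if stances[i][t1] == stances[j][t2] else 0.0
            total += bonus - penalty
        scores[i] = total / (len(sources) - 1)
    return scores
```

Note the key property: if every source reports an identical constant stance on every claim, bonus and penalty always cancel and all scores are exactly zero, so blind agreement buys nothing; informative, claim-dependent honesty is what earns positive score.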