[2602.11908] When Should LLMs Be Less Specific? Selective Abstraction for Reliable Long-Form Text Generation
Summary
This paper introduces Selective Abstraction (SA), a framework that improves the reliability of long-form text generated by LLMs by selectively reducing the detail of uncertain content, enhancing factual accuracy while preserving meaning.
Why It Matters
As LLMs become integral to various applications, ensuring their reliability is crucial, especially in high-stakes environments. The proposed SA framework addresses the challenge of balancing specificity and reliability, potentially increasing user trust and adoption.
Key Takeaways
- Selective Abstraction (SA) improves LLM reliability by reducing detail in uncertain content.
- The framework enhances factual accuracy while maintaining the original meaning of text.
- Atom-wise SA outperforms existing methods, improving the area under the risk-coverage curve by up to 27.73%.
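The risk-coverage curve referenced in the last takeaway plots, at each coverage level (fraction of claims retained, ordered by confidence), the risk (error rate) among the retained claims. A minimal sketch of its AUC, assuming per-claim confidence scores and correctness labels are available; the function name and inputs are illustrative, not the paper's implementation:

```python
def risk_coverage_auc(confidences, correct):
    """Mean risk over all coverage levels (area under the risk-coverage curve).

    Claims are sorted by descending confidence; at coverage i/n the risk is
    the error rate among the i most-confident claims. A lower AUC means the
    confidence scores separate correct from incorrect claims more cleanly.
    """
    pairs = sorted(zip(confidences, correct), key=lambda p: -p[0])
    risks = []
    errors = 0
    for i, (_, ok) in enumerate(pairs, start=1):
        errors += 0 if ok else 1
        risks.append(errors / i)  # risk at coverage level i/n
    return sum(risks) / len(risks)
```

For example, with confidences `[0.9, 0.8, 0.2]` and correctness `[True, True, False]`, the risks at coverages 1/3, 2/3, and 1 are 0, 0, and 1/3, giving an AUC of 1/9.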
Computer Science > Artificial Intelligence
arXiv:2602.11908 (cs)
[Submitted on 12 Feb 2026 (v1), last revised 13 Feb 2026 (this version, v2)]
Title: When Should LLMs Be Less Specific? Selective Abstraction for Reliable Long-Form Text Generation
Authors: Shani Goren, Ido Galil, Ran El-Yaniv
Abstract: LLMs are widely used, yet they remain prone to factual errors that erode user trust and limit adoption in high-risk settings. One approach to mitigate this risk is to equip models with uncertainty estimation mechanisms that abstain when confidence is low. However, this binary "all-or-nothing" approach is excessively restrictive in long-form settings, often discarding valuable information. We introduce Selective Abstraction (SA), a framework that enables LLMs to trade specificity for reliability by selectively reducing the detail of uncertain content. We first formalize SA through the lenses of selective risk and coverage. We then propose Atom-wise Selective Abstraction, a claim-level instantiation that decomposes responses into atomic claims (short, self-contained statements each expressing a single fact) and replaces uncertain atoms with higher-confidence, less specific abstractions. To evaluate this framework, we develop a novel end-to-end pipeline for open-ended generation that instantiate...
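The atom-wise procedure described in the abstract — decompose a response into atomic claims, score each, and replace uncertain atoms with less specific abstractions — can be sketched as below. This is a hedged illustration, not the paper's method: the dictionary keys, the threshold value, and the function name are all hypothetical, and producing the confidence scores and abstractions themselves is assumed to be handled upstream by the model.

```python
def atom_wise_selective_abstraction(atoms, threshold=0.7):
    """Rebuild a response, keeping confident atoms and abstracting the rest.

    `atoms` is a list of dicts with hypothetical keys:
      'claim'       - the atomic claim text
      'confidence'  - model confidence in the claim (0..1)
      'abstraction' - a higher-confidence, less specific rewrite
    Atoms at or above `threshold` are kept verbatim; the others are
    replaced by their abstraction instead of being dropped outright.
    """
    parts = []
    for atom in atoms:
        if atom["confidence"] >= threshold:
            parts.append(atom["claim"])
        else:
            parts.append(atom["abstraction"])
    return " ".join(parts)
```

Compared with binary abstention, every atom still contributes some information: an uncertain "It is 1,149 m long." might survive as "It is over a kilometre long." rather than being deleted.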