[2508.18395] Latent Self-Consistency for Reliable Majority-Set Selection in Short- and Long-Answer Reasoning
Computer Science > Computation and Language
arXiv:2508.18395 (cs)
[Submitted on 25 Aug 2025 (v1), last revised 27 Feb 2026 (this version, v3)]

Title: Latent Self-Consistency for Reliable Majority-Set Selection in Short- and Long-Answer Reasoning
Authors: Jungsuk Oh, Jay-Yoon Lee

Abstract: Probabilistic decoding in Large Language Models (LLMs) often yields inconsistent outputs, particularly on complex or long-form questions. Self-Consistency (SC) mitigates this for short-form QA by majority voting over exact answer strings, whereas Universal Self-Consistency (USC) and the Weighted Unigram Consistency Score (WUCS) extend to long-form responses but lose accuracy on short-form benchmarks. We introduce \textbf{Latent Self-Consistency (LSC)}, which selects the most semantically consistent response using learnable token embeddings. LSC's lightweight forward processing of summary tokens introduces negligible runtime overhead (at most $0.9\%$) on top of standard decoding of the base LLM, and requires no changes to the model architecture. Across 6 short-form and 5 long-form reasoning benchmarks (e.g., MATH, MMLU, TruthfulQA), LSC surpasses SC, USC, and WUCS in average performance on both short-form and long-form tasks, while adding negligible computational overhead over vanilla inference. These results position ...
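The contrast the abstract draws can be illustrated with a minimal sketch: classic SC picks the exact answer string sampled most often, while an LSC-style selector instead picks the response whose representation agrees most, on average, with the other sampled responses. The toy 2-D vectors below are purely hypothetical stand-ins for the paper's learnable summary-token embeddings; this is not the authors' implementation.

```python
from collections import Counter
import math

def sc_majority_vote(answers):
    """Classic Self-Consistency: majority vote over exact answer strings."""
    return Counter(answers).most_common(1)[0][0]

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def lsc_style_select(responses, embeddings):
    """LSC-style selection sketch: return the response whose embedding is
    most similar, on average, to all the others (the semantic 'majority')."""
    scores = []
    for i, e in enumerate(embeddings):
        others = [cosine(e, f) for j, f in enumerate(embeddings) if j != i]
        scores.append(sum(others) / len(others))
    best = max(range(len(responses)), key=lambda i: scores[i])
    return responses[best]

# Toy example: three sampled long-form responses; the first two agree
# semantically but not as exact strings, so string voting cannot group them.
responses = ["The answer is 42.", "It works out to 42.", "The answer is 17."]
embeddings = [[0.9, 0.1], [0.85, 0.15], [0.1, 0.9]]
print(lsc_style_select(responses, embeddings))  # one of the two "42" responses
```

Note the design point this makes concrete: exact-string voting only works when answers collide verbatim (short-form QA), whereas similarity in an embedding space lets semantically equivalent long-form answers form a majority set.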