[2508.18395] Latent Self-Consistency for Reliable Majority-Set Selection in Short- and Long-Answer Reasoning



Computer Science > Computation and Language
arXiv:2508.18395 (cs)
[Submitted on 25 Aug 2025 (v1), last revised 27 Feb 2026 (this version, v3)]

Title: Latent Self-Consistency for Reliable Majority-Set Selection in Short- and Long-Answer Reasoning
Authors: Jungsuk Oh, Jay-Yoon Lee

Abstract: Probabilistic decoding in Large Language Models (LLMs) often yields inconsistent outputs, particularly on complex or long-form questions. Self-Consistency (SC) mitigates this for short-form QA by majority voting over exact strings, whereas Universal Self-Consistency (USC) and the Weighted Unigram Consistency Score (WUCS) extend to long-form responses but lose accuracy on short-form benchmarks. We introduce Latent Self-Consistency (LSC), which selects the most semantically consistent response using learnable token embeddings. LSC's lightweight forward processing of summary tokens introduces negligible runtime overhead (at most 0.9%) on top of standard decoding of the base LLM and requires no changes to the model architecture. Across 6 short-form and 5 long-form reasoning benchmarks (e.g., MATH, MMLU, TruthfulQA), LSC surpasses SC, USC, and WUCS in average performance on both short-form and long-form tasks, while adding negligible computational overhead over vanilla inference. These results position ...
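For context, the plain Self-Consistency baseline that the abstract contrasts LSC against is simple to state: sample several answers, normalize them, and keep the most frequent exact string. A minimal sketch (illustrative only; the paper's LSC method instead compares learnable latent embeddings, which this snippet does not attempt):

```python
from collections import Counter

def self_consistency_vote(answers: list[str]) -> str:
    """Plain SC baseline: majority vote over exact (normalized) strings.

    `answers` is assumed to be multiple sampled completions for the
    same question, e.g. final answers extracted from chain-of-thought
    samples. Ties are broken by first occurrence, as Counter does.
    """
    normalized = [a.strip().lower() for a in answers]
    most_common_answer, _count = Counter(normalized).most_common(1)[0]
    return most_common_answer

# Four sampled short-form answers; "42" wins the majority vote.
samples = ["42", "42", "41", " 42 "]
print(self_consistency_vote(samples))  # -> 42
```

This exact-string matching is why SC works for short-form QA but degrades on long-form answers, where semantically equivalent responses rarely match verbatim; that gap is what USC, WUCS, and LSC target.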

Originally published on March 02, 2026. Curated by AI News.

