[2509.10625] No Answer Needed: Predicting LLM Answer Accuracy from Question-Only Linear Probes



Computer Science > Computation and Language

arXiv:2509.10625 (cs) [Submitted on 12 Sep 2025 (v1), last revised 3 Mar 2026 (this version, v3)]

Title: No Answer Needed: Predicting LLM Answer Accuracy from Question-Only Linear Probes

Authors: Iván Vicente Moreno Cencerrado, Arnau Padrés Masdemont, Anton Gonzalvez Hawthorne, David Demitri Africa, Lorenzo Pacchiardi

Abstract: Do large language models (LLMs) anticipate when they will answer correctly? To study this, we extract activations after a question is read but before any tokens are generated, and train linear probes to predict whether the model's forthcoming answer will be correct. Across three open-source model families ranging from 7 to 70 billion parameters, projections on this "in-advance correctness direction", trained on generic trivia questions, predict success in distribution and on diverse out-of-distribution knowledge datasets, indicating a deeper signal than dataset-specific spurious features, and outperforming black-box baselines and verbalised predicted confidence. Predictive power saturates in intermediate layers and, notably, generalisation falters on questions requiring mathematical reasoning. Moreover, for models responding "I don't know", doing so strongly correlates with the probe score, indicating that the same ...
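The core idea in the abstract can be sketched as a small probe-training loop. This is a minimal illustration, not the paper's pipeline: the activation vectors below are synthetic stand-ins (a real setup would cache hidden states from an intermediate layer of the LLM at the last question token, before any answer tokens are generated), and the planted direction `w_true` is a hypothetical "correctness direction" added so the probe has signal to recover.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for residual-stream activations at the last
# question token (hidden size 64). In the paper's setting these would
# come from a model family of 7B-70B parameters; here they are random
# vectors with a planted linear "correctness direction" w_true.
d_model = 64
w_true = rng.normal(size=d_model)

def sample(n):
    X = rng.normal(size=(n, d_model))
    # Label: whether the model's forthcoming answer would be correct.
    # Noisy linear function of the activations, so a linear probe can
    # recover it but not perfectly.
    y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(int)
    return X, y

X_train, y_train = sample(400)
X_test, y_test = sample(100)

# The probe itself: a single linear direction read off the activations,
# fit to predict answer correctness from the question representation alone.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"probe test accuracy: {probe.score(X_test, y_test):.2f}")
```

The probe's score on held-out questions plays the role of the paper's in-advance correctness prediction; the out-of-distribution tests in the abstract correspond to evaluating the same fitted direction on activations from different knowledge datasets.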

Originally published on March 04, 2026. Curated by AI News.

