[2602.09924] LLMs Encode Their Failures: Predicting Success from Pre-Generation Activations


Computer Science > Computation and Language

arXiv:2602.09924 (cs). Submitted on 10 Feb 2026 (v1), last revised 6 Apr 2026 (this version, v3).

Title: LLMs Encode Their Failures: Predicting Success from Pre-Generation Activations

Authors: William Lugoloobi, Thomas Foster, William Bankes, Chris Russell

Abstract: Running LLMs with extended reasoning on every problem is expensive, but determining which inputs actually require additional compute remains challenging. We investigate whether a model's own likelihood of success is recoverable from its internal representations before generation, and whether this signal can guide more efficient inference. We train linear probes on pre-generation activations to predict policy-specific success on math and coding tasks, substantially outperforming surface features such as question length and TF-IDF. Using E2H-AMC, which provides both human and model performance on identical problems, we show that models encode a model-specific notion of difficulty that is distinct from human difficulty, and that this distinction increases with extended reasoning. Leveraging these probes, we demonstrate that routing queries across a pool of models can exceed the best-performing model whilst reducing inference cost by up to 70% on MATH, showing that internal representations enable practi...
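The probe-and-route idea from the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it uses synthetic vectors in place of real pre-generation activations, fits a least-squares linear probe per model to predict that model's success, and routes each query to a hypothetical "cheap" model only when its predicted success clears a threshold. All names, dimensions, and the threshold are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for pre-generation activations (one vector per query)
# and per-model binary success labels. Real activations would come from a
# forward pass over the prompt before any tokens are generated.
n, d = 600, 32
X = rng.normal(size=(n, d))
w_cheap = rng.normal(size=d)
w_strong = rng.normal(size=d)
y_cheap = (X @ w_cheap > 0.0).astype(float)    # cheap model succeeds ~50% of the time
y_strong = (X @ w_strong > -1.0).astype(float)  # strong model succeeds more often

# Linear probes: a least-squares fit from activations (plus a bias term)
# to each model's success labels.
Xb = np.hstack([X, np.ones((n, 1))])
train, test = slice(0, 500), slice(500, n)
probe_cheap, *_ = np.linalg.lstsq(Xb[train], y_cheap[train], rcond=None)
probe_strong, *_ = np.linalg.lstsq(Xb[train], y_strong[train], rcond=None)

def route(x, threshold=0.6):
    """Send the query to the cheap model when its predicted success is high.

    The threshold is a free parameter (an assumption here); in practice it
    would be tuned to trade accuracy against inference cost.
    """
    xb = np.append(x, 1.0)
    return "cheap" if xb @ probe_cheap >= threshold else "strong"

choices = [route(x) for x in X[test]]
frac_cheap = choices.count("cheap") / len(choices)
print(f"routed to cheap model: {frac_cheap:.0%} of test queries")
```

Every query handled by the cheap model avoids the strong model's cost entirely, which is how this kind of routing can cut aggregate inference cost while keeping accuracy: the probe concentrates the expensive model on the queries where the cheap one is predicted to fail.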

Originally published on April 07, 2026. Curated by AI News.

