[2507.11768] LLMs are Bayesian, In Expectation, Not in Realization

arXiv - Machine Learning · 4 min read

Summary

This paper argues that large language models (LLMs) are Bayesian in expectation over orderings of the input, not under any single fixed ordering, because positional encodings break exchangeability. It supports this with theory in a Bernoulli setting and with experiments on black-box next-token log-probabilities.

Why It Matters

Martingale diagnostics that appear to refute Bayesian accounts of in-context learning may instead reflect an architectural fact: positional encodings make transformers order-sensitive. Reframing the baseline as performance in expectation over orderings changes how Bayesian behavior in LLMs should be tested, which matters for interpreting and evaluating in-context learning.

Key Takeaways

  • Positional encodings break exchangeability, so martingale diagnostics can flag order-induced violations even when a model is Bayesian in expectation.
  • The relevant baseline is performance in expectation over orderings of an exchangeable multiset, not performance under every fixed ordering (see the sketch after this list).
  • Expectation-realization gaps in next-token log-probabilities are nonzero but decay with context length, and permutation averaging reduces order-induced standard deviation with a k^{-1/2} trend.
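To make the second takeaway concrete, here is a minimal sketch of permutation averaging; it is not the authors' code. The scorer `sequence_log_prob` is a hypothetical stand-in for an LLM's summed next-token log-probabilities, faked here so that different orderings genuinely disagree.

```python
import math
import random

def sequence_log_prob(tokens):
    # Hypothetical order-sensitive scorer: a real version would query an
    # LLM for next-token log-probabilities. This fake depends on position,
    # so permutations of the same multiset receive different scores.
    return sum(math.log(1.0 / (i + tok + 1)) for i, tok in enumerate(tokens))

def expected_log_prob(multiset, k=100, seed=0):
    """Monte Carlo estimate of the expected score over orderings."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(k):
        perm = list(multiset)
        rng.shuffle(perm)          # sample a uniformly random ordering
        total += sequence_log_prob(perm)
    return total / k

data = [0, 1, 1, 0, 1]             # an exchangeable Bernoulli multiset
fixed = sequence_log_prob(data)    # realization: one fixed ordering
avg = expected_log_prob(data)      # expectation over sampled orderings
print(f"fixed ordering: {fixed:.3f}, permutation average: {avg:.3f}")
```

The gap between the two printed numbers is the expectation-realization gap for this particular ordering; the paper's point is that the averaged quantity, not the fixed-order one, is the right Bayesian baseline.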

Statistics > Machine Learning
arXiv:2507.11768 (stat) [Submitted on 15 Jul 2025 (v1), last revised 22 Feb 2026 (this version, v2)]

Title: LLMs are Bayesian, In Expectation, Not in Realization
Authors: Leon Chlon, Zein Khamis, Maggie Chlon, Mahdi El Zein, MarcAntonio M. Awada

Abstract: Exchangeability-based martingale diagnostics have been used to question Bayesian explanations of transformer in-context learning. We show that these violations are compatible with Bayesian/MDL behavior once we account for a basic architectural fact: positional encodings break exchangeability. Accordingly, the relevant baseline is performance in expectation over orderings of an exchangeable multiset, not performance under every fixed ordering. In a Bernoulli microscope (under explicit regularity assumptions), we bound the permutation-induced dispersion detected by martingale diagnostics (Theorem 3.4) while proving near-optimal expected MDL/compression over permutations (Theorem 3.6). Empirically, black-box next-token log-probabilities from an Azure OpenAI deployment exhibit nonzero expectation-realization gaps that decay with context length (mean 0.74 at n = 10 to 0.26 at n = 50; 95% confidence intervals), and permutation averaging reduces order-induced standard deviation with a k^{-1/2} trend (Figure 2). Controlled from-scratch training abla...
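The k^{-1/2} trend is what standard Monte Carlo averaging predicts. The sketch below uses a hypothetical order-sensitive statistic `order_score` (not the paper's Azure OpenAI pipeline) and shows the order-induced standard deviation of a k-permutation average shrinking roughly like k**-0.5:

```python
import random
import statistics

def order_score(tokens):
    # Hypothetical: weight each token by its position, so different
    # orderings of the same multiset yield different scores.
    return sum((i + 1) * tok for i, tok in enumerate(tokens))

def permutation_average(multiset, k, rng):
    # Average the statistic over k uniformly sampled orderings.
    total = 0.0
    for _ in range(k):
        perm = list(multiset)
        rng.shuffle(perm)
        total += order_score(perm)
    return total / k

rng = random.Random(0)
data = [rng.randint(0, 1) for _ in range(20)]  # one fixed exchangeable multiset
for k in (1, 4, 16, 64):
    # Std over many independent k-permutation averages of the SAME multiset:
    # only the sampled orderings vary, isolating order-induced dispersion.
    estimates = [permutation_average(data, k, rng) for _ in range(200)]
    print(f"k={k:3d}  order-induced std: {statistics.stdev(estimates):.3f}")
```

Because the k sampled orderings are independent draws of the same statistic, the standard deviation of their mean falls like k^{-1/2}, matching the trend the abstract reports for permutation averaging (Figure 2).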
