[2602.15029] Symmetry in language statistics shapes the geometry of model representations
arXiv - Machine Learning 4 min read Article

Summary

This article examines how symmetry in language statistics shapes the geometry of learned representations in machine-learning models, particularly large language models (LLMs).

Why It Matters

Understanding the geometric structures that emerge from language statistics can enhance our grasp of how neural networks process and represent language. This insight is crucial for improving model performance and robustness in natural language processing tasks.

Key Takeaways

  • Language co-occurrence statistics exhibit a translation symmetry that shapes the geometry of model representations.
  • Geometric structures, such as circular representations of calendar months, persist even under strong perturbations of the statistics.
  • The robustness of these structures is explained by an underlying continuous latent variable that collectively controls the co-occurrence statistics.

Computer Science > Machine Learning

arXiv:2602.15029 (cs) [Submitted on 16 Feb 2026]

Title: Symmetry in language statistics shapes the geometry of model representations

Authors: Dhruva Karkada, Daniel J. Korchinski, Andres Nava, Matthieu Wyart, Yasaman Bahri

Abstract: Although learned representations underlie neural networks' success, their fundamental properties remain poorly understood. A striking example is the emergence of simple geometric structures in LLM representations: for example, calendar months organize into a circle, years form a smooth one-dimensional manifold, and cities' latitudes and longitudes can be decoded by a linear probe. We show that the statistics of language exhibit a translation symmetry -- e.g., the co-occurrence probability of two months depends only on the time interval between them -- and we prove that the latter governs the aforementioned geometric structures in high-dimensional word embedding models. Moreover, we find that these structures persist even when the co-occurrence statistics are strongly perturbed (for example, by removing all sentences in which two months appear together) and at moderate embedding dimension. We show that this robustness naturally emerges if the co-occurrence statistics are collectively controlled by an underlying continuous latent variable. W...
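The mechanism described in the abstract can be illustrated with a small sketch (not the paper's code; the decay kernel and embedding procedure here are illustrative assumptions). A co-occurrence matrix that depends only on the cyclic time interval between months is circulant, so its eigenvectors are Fourier modes; embedding the 12 months with the leading non-constant eigenvector pair places them on a circle:

```python
import numpy as np

n = 12  # calendar months

# Assumed translation-symmetric statistics: co-occurrence depends only
# on the cyclic interval between two months, decaying with distance.
intervals = np.minimum(np.arange(n), n - np.arange(n))  # cyclic distance 0..6
kernel = np.exp(-intervals / 2.0)

# M is symmetric and circulant: M[i, j] = kernel[(j - i) mod n].
M = np.array([[kernel[(j - i) % n] for j in range(n)] for i in range(n)])

# Eigenvectors of a circulant matrix are Fourier modes. Take the two
# eigenvectors after the top (constant) mode as a 2-D embedding.
vals, vecs = np.linalg.eigh(M)
order = np.argsort(vals)[::-1]
emb = vecs[:, order[1:3]]

# Every month sits at the same distance from the origin: a circle.
radii = np.linalg.norm(emb, axis=1)
print(np.allclose(radii, radii[0]))  # True
```

The circle is not an artifact of the particular exponential kernel: any positive kernel that decays with cyclic interval yields a constant mode followed by a degenerate cosine/sine pair, and projecting onto that pair gives equal radii for all months.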
