[2601.13358] The Geometry of Thought: How Scale Restructures Reasoning In Large Language Models

Computer Science > Artificial Intelligence
arXiv:2601.13358 (cs)

[Submitted on 19 Jan 2026 (v1), last revised 30 Mar 2026 (this version, v2)]
This paper has been withdrawn by Samuel Anderson. No PDF available.

Title: The Geometry of Thought: How Scale Restructures Reasoning In Large Language Models
Authors: Samuel Cyrenius Anderson

Abstract: Scale does not uniformly improve reasoning - it restructures it. Analyzing 25,000+ chain-of-thought trajectories across four domains (Law, Science, Code, Math) and two scales (8B, 70B parameters), we discover that neural scaling laws trigger domain-specific phase transitions rather than uniform capability gains. Legal reasoning undergoes Crystallization: a 45% collapse in representational dimensionality (d95: 501 -> 274), a 31% increase in trajectory alignment, and 10x manifold untangling. Scientific and mathematical reasoning remain Liquid - geometrically invariant despite a 9x parameter increase. Code reasoning forms a discrete Lattice of strategic modes (silhouette: 0.13 -> 0.42). This geometry predicts learnability. We introduce Neural Reasoning Operators - learned mappings from initial to terminal hidden states. In crystalline legal reasoning, our operator achieves 63.6% accuracy on held-out tasks via probe decoding, predicting reasoning endpoints without traversing interm...
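The abstract leaves the two key quantities undefined in code. A minimal sketch of how both are commonly computed: d95 as the number of principal components needed to explain 95% of variance, and a "reasoning operator" as a ridge regression from initial to terminal hidden states. The data here are synthetic stand-ins; the paper's actual models, layers, and operator architecture are not specified in the abstract, so everything below (shapes, the linear form, the regularizer) is an illustrative assumption, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for hidden states (assumption: real data would be
# layer activations from 8B/70B models; shapes here are illustrative).
n, d = 500, 64
H0 = rng.normal(size=(n, d))                          # initial hidden states
W_true = 0.1 * rng.normal(size=(d, d))
H_T = H0 @ W_true + 0.05 * rng.normal(size=(n, d))    # terminal hidden states

def d95(X):
    """Number of principal components explaining 95% of the variance of X."""
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)
    var_ratio = s**2 / np.sum(s**2)
    return int(np.searchsorted(np.cumsum(var_ratio), 0.95) + 1)

def fit_operator(X, Y, lam=1e-2):
    """Ridge-regression operator W minimising ||XW - Y||^2 + lam*||W||^2."""
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ Y)

# Fit on 400 trajectories, evaluate endpoint prediction on 100 held-out ones.
W = fit_operator(H0[:400], H_T[:400])
pred = H0[400:] @ W
resid = np.sum((pred - H_T[400:])**2)
total = np.sum((H_T[400:] - H_T[400:].mean(axis=0))**2)
r2 = 1 - resid / total

print("d95 of terminal states:", d95(H_T))
print("held-out R^2 of linear operator:", round(r2, 3))
```

A drop in `d95` between two sets of trajectories is what the abstract calls dimensionality collapse; a high held-out score for the fitted operator corresponds to its claim that endpoints are predictable without traversing intermediate steps.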

Originally published on April 01, 2026. Curated by AI News.
