[2601.13358] The Geometry of Thought: How Scale Restructures Reasoning In Large Language Models
Computer Science > Artificial Intelligence

arXiv:2601.13358 (cs)

This paper has been withdrawn by Samuel Anderson.
[Submitted on 19 Jan 2026 (v1), last revised 30 Mar 2026 (this version, v2)]

Title: The Geometry of Thought: How Scale Restructures Reasoning In Large Language Models
Authors: Samuel Cyrenius Anderson

Abstract: Scale does not uniformly improve reasoning; it restructures it. Analyzing more than 25,000 chain-of-thought trajectories across four domains (Law, Science, Code, Math) and two model scales (8B and 70B parameters), we find that scaling triggers domain-specific phase transitions rather than uniform capability gains. Legal reasoning undergoes Crystallization: a 45% collapse in representational dimensionality (d95: 501 -> 274), a 31% increase in trajectory alignment, and 10x manifold untangling. Scientific and mathematical reasoning remain Liquid, geometrically invariant despite a 9x parameter increase. Code reasoning forms a discrete Lattice of strategic modes (silhouette: 0.13 -> 0.42). This geometry predicts learnability. We introduce Neural Reasoning Operators, learned mappings from initial to terminal hidden states. In crystalline legal reasoning, our operator achieves 63.6% accuracy on held-out tasks via probe decoding, predicting reasoning endpoints without traversing intermediate steps.
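To make two of the abstract's quantities concrete, the sketch below shows a minimal way to compute a d95-style dimensionality (the smallest number of PCA components explaining 95% of variance in pooled hidden states) and to fit a simple linear stand-in for a "Neural Reasoning Operator" mapping initial to terminal hidden states. This is not the authors' code: the data shapes, the ridge-regression parameterization, and the toy data are all assumptions made for illustration, since the abstract does not specify the implementation.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

def d95(states: np.ndarray) -> int:
    """Smallest number of PCA components explaining >= 95% of variance.

    states: (n_samples, hidden_dim) hidden states pooled across
    chain-of-thought trajectories (an assumption about the setup).
    """
    pca = PCA().fit(states)
    cumvar = np.cumsum(pca.explained_variance_ratio_)
    return int(np.searchsorted(cumvar, 0.95) + 1)

def fit_reasoning_operator(h_init: np.ndarray, h_term: np.ndarray) -> Ridge:
    """Learned map from initial to terminal hidden states.

    A multi-output ridge regression stands in for the paper's
    'Neural Reasoning Operator'; the true parameterization is unknown.
    """
    return Ridge(alpha=1.0).fit(h_init, h_term)

# Toy usage with random data in place of real 8B/70B hidden states.
rng = np.random.default_rng(0)
h_init = rng.normal(size=(1000, 512))  # hypothetical hidden_dim = 512
h_term = h_init @ rng.normal(size=(512, 512)) * 0.05 + rng.normal(size=(1000, 512))

print("d95 of terminal states:", d95(h_term))
op = fit_reasoning_operator(h_init, h_term)
pred = op.predict(h_init)  # predicted reasoning endpoints, no intermediate steps

Under this framing, Crystallization would show up as a drop in d95 at the larger scale, and the operator's endpoint predictions would be scored by a downstream probe (here one would substitute the paper's probe decoding, which the abstract does not detail).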